Aug 13 00:54:02.183748 kernel: Linux version 5.15.189-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Tue Aug 12 23:01:50 -00 2025 Aug 13 00:54:02.183794 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8f8aacd9fbcdd713563d390e899e90bedf5577e4b1b261b4e57687d87edd6b57 Aug 13 00:54:02.183809 kernel: BIOS-provided physical RAM map: Aug 13 00:54:02.183817 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Aug 13 00:54:02.183828 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Aug 13 00:54:02.183839 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Aug 13 00:54:02.183851 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable Aug 13 00:54:02.183858 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved Aug 13 00:54:02.183871 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Aug 13 00:54:02.183882 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Aug 13 00:54:02.183894 kernel: NX (Execute Disable) protection: active Aug 13 00:54:02.183905 kernel: SMBIOS 2.8 present. Aug 13 00:54:02.183912 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Aug 13 00:54:02.183919 kernel: Hypervisor detected: KVM Aug 13 00:54:02.183930 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Aug 13 00:54:02.183947 kernel: kvm-clock: cpu 0, msr 4a19e001, primary cpu clock Aug 13 00:54:02.183960 kernel: kvm-clock: using sched offset of 3897808159 cycles Aug 13 00:54:02.183971 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Aug 13 00:54:02.183990 kernel: tsc: Detected 1995.309 MHz processor Aug 13 00:54:02.184002 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Aug 13 00:54:02.184012 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Aug 13 00:54:02.184022 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Aug 13 00:54:02.184031 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Aug 13 00:54:02.184046 kernel: ACPI: Early table checksum verification disabled Aug 13 00:54:02.184057 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS ) Aug 13 00:54:02.184068 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 00:54:02.184076 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 00:54:02.184083 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 00:54:02.184091 kernel: ACPI: FACS 0x000000007FFE0000 000040 Aug 13 00:54:02.184102 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 00:54:02.184112 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 00:54:02.184124 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 00:54:02.184138 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 00:54:02.184151 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd] Aug 13 00:54:02.184162 kernel: ACPI: Reserving DSDT table 
memory at [mem 0x7ffe0040-0x7ffe1769] Aug 13 00:54:02.184186 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Aug 13 00:54:02.184194 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Aug 13 00:54:02.184201 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Aug 13 00:54:02.184208 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Aug 13 00:54:02.184244 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Aug 13 00:54:02.184262 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Aug 13 00:54:02.184270 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Aug 13 00:54:02.184277 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Aug 13 00:54:02.184286 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Aug 13 00:54:02.184294 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff] Aug 13 00:54:02.184302 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff] Aug 13 00:54:02.184313 kernel: Zone ranges: Aug 13 00:54:02.184321 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Aug 13 00:54:02.184329 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff] Aug 13 00:54:02.184337 kernel: Normal empty Aug 13 00:54:02.184349 kernel: Movable zone start for each node Aug 13 00:54:02.184365 kernel: Early memory node ranges Aug 13 00:54:02.184381 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Aug 13 00:54:02.184394 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff] Aug 13 00:54:02.184407 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff] Aug 13 00:54:02.184424 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Aug 13 00:54:02.184443 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Aug 13 00:54:02.184452 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges Aug 13 00:54:02.184460 kernel: ACPI: PM-Timer IO Port: 0x608 Aug 13 00:54:02.184468 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Aug 13 00:54:02.184476 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Aug 13 00:54:02.184484 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Aug 13 00:54:02.184492 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Aug 13 00:54:02.184501 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Aug 13 00:54:02.184512 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Aug 13 00:54:02.184525 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Aug 13 00:54:02.184533 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Aug 13 00:54:02.184541 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Aug 13 00:54:02.184549 kernel: TSC deadline timer available Aug 13 00:54:02.184557 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Aug 13 00:54:02.184566 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Aug 13 00:54:02.184574 kernel: Booting paravirtualized kernel on KVM Aug 13 00:54:02.184582 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Aug 13 00:54:02.184594 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Aug 13 00:54:02.184602 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576 Aug 13 00:54:02.184610 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152 Aug 13 00:54:02.184618 kernel: pcpu-alloc: [0] 0 1 Aug 13 00:54:02.184626 kernel: 
kvm-guest: stealtime: cpu 0, msr 7dc1c0c0 Aug 13 00:54:02.184634 kernel: kvm-guest: PV spinlocks disabled, no host support Aug 13 00:54:02.184642 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803 Aug 13 00:54:02.184650 kernel: Policy zone: DMA32 Aug 13 00:54:02.184660 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8f8aacd9fbcdd713563d390e899e90bedf5577e4b1b261b4e57687d87edd6b57 Aug 13 00:54:02.184671 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Aug 13 00:54:02.184679 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Aug 13 00:54:02.184688 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Aug 13 00:54:02.184696 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Aug 13 00:54:02.184704 kernel: Memory: 1973276K/2096612K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47488K init, 4092K bss, 123076K reserved, 0K cma-reserved) Aug 13 00:54:02.184745 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Aug 13 00:54:02.184754 kernel: Kernel/User page tables isolation: enabled Aug 13 00:54:02.184762 kernel: ftrace: allocating 34608 entries in 136 pages Aug 13 00:54:02.184773 kernel: ftrace: allocated 136 pages with 2 groups Aug 13 00:54:02.184781 kernel: rcu: Hierarchical RCU implementation. Aug 13 00:54:02.184791 kernel: rcu: RCU event tracing is enabled. Aug 13 00:54:02.184799 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Aug 13 00:54:02.184808 kernel: Rude variant of Tasks RCU enabled. Aug 13 00:54:02.184816 kernel: Tracing variant of Tasks RCU enabled. Aug 13 00:54:02.184828 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Aug 13 00:54:02.184840 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Aug 13 00:54:02.184852 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Aug 13 00:54:02.184866 kernel: random: crng init done Aug 13 00:54:02.184879 kernel: Console: colour VGA+ 80x25 Aug 13 00:54:02.184889 kernel: printk: console [tty0] enabled Aug 13 00:54:02.184897 kernel: printk: console [ttyS0] enabled Aug 13 00:54:02.184905 kernel: ACPI: Core revision 20210730 Aug 13 00:54:02.184914 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Aug 13 00:54:02.184922 kernel: APIC: Switch to symmetric I/O mode setup Aug 13 00:54:02.184931 kernel: x2apic enabled Aug 13 00:54:02.184939 kernel: Switched APIC routing to physical x2apic. Aug 13 00:54:02.184948 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Aug 13 00:54:02.184959 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3985bd6d44e, max_idle_ns: 881590467931 ns Aug 13 00:54:02.184967 kernel: Calibrating delay loop (skipped) preset value.. 
3990.61 BogoMIPS (lpj=1995309) Aug 13 00:54:02.184987 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Aug 13 00:54:02.184996 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Aug 13 00:54:02.185004 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Aug 13 00:54:02.185012 kernel: Spectre V2 : Mitigation: Retpolines Aug 13 00:54:02.185020 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Aug 13 00:54:02.185029 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Aug 13 00:54:02.185041 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Aug 13 00:54:02.185059 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Aug 13 00:54:02.185069 kernel: MDS: Mitigation: Clear CPU buffers Aug 13 00:54:02.185081 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Aug 13 00:54:02.185090 kernel: ITS: Mitigation: Aligned branch/return thunks Aug 13 00:54:02.185100 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Aug 13 00:54:02.185109 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Aug 13 00:54:02.185118 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Aug 13 00:54:02.185128 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Aug 13 00:54:02.185138 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Aug 13 00:54:02.185150 kernel: Freeing SMP alternatives memory: 32K Aug 13 00:54:02.185160 kernel: pid_max: default: 32768 minimum: 301 Aug 13 00:54:02.185188 kernel: LSM: Security Framework initializing Aug 13 00:54:02.188273 kernel: SELinux: Initializing. Aug 13 00:54:02.188302 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Aug 13 00:54:02.188311 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Aug 13 00:54:02.188320 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Aug 13 00:54:02.188340 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. Aug 13 00:54:02.188349 kernel: signal: max sigframe size: 1776 Aug 13 00:54:02.188359 kernel: rcu: Hierarchical SRCU implementation. Aug 13 00:54:02.188368 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Aug 13 00:54:02.188376 kernel: smp: Bringing up secondary CPUs ... Aug 13 00:54:02.188385 kernel: x86: Booting SMP configuration: Aug 13 00:54:02.188394 kernel: .... 
node #0, CPUs: #1 Aug 13 00:54:02.188403 kernel: kvm-clock: cpu 1, msr 4a19e041, secondary cpu clock Aug 13 00:54:02.188412 kernel: kvm-guest: stealtime: cpu 1, msr 7dd1c0c0 Aug 13 00:54:02.188424 kernel: smp: Brought up 1 node, 2 CPUs Aug 13 00:54:02.188433 kernel: smpboot: Max logical packages: 1 Aug 13 00:54:02.188442 kernel: smpboot: Total of 2 processors activated (7981.23 BogoMIPS) Aug 13 00:54:02.188450 kernel: devtmpfs: initialized Aug 13 00:54:02.188462 kernel: x86/mm: Memory block size: 128MB Aug 13 00:54:02.188476 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 13 00:54:02.188488 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Aug 13 00:54:02.188501 kernel: pinctrl core: initialized pinctrl subsystem Aug 13 00:54:02.188514 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 13 00:54:02.188530 kernel: audit: initializing netlink subsys (disabled) Aug 13 00:54:02.188543 kernel: audit: type=2000 audit(1755046441.026:1): state=initialized audit_enabled=0 res=1 Aug 13 00:54:02.188557 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 13 00:54:02.188571 kernel: thermal_sys: Registered thermal governor 'user_space' Aug 13 00:54:02.188586 kernel: cpuidle: using governor menu Aug 13 00:54:02.188599 kernel: ACPI: bus type PCI registered Aug 13 00:54:02.188629 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 13 00:54:02.188643 kernel: dca service started, version 1.12.1 Aug 13 00:54:02.188657 kernel: PCI: Using configuration type 1 for base access Aug 13 00:54:02.188674 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Aug 13 00:54:02.188688 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Aug 13 00:54:02.188702 kernel: ACPI: Added _OSI(Module Device) Aug 13 00:54:02.188716 kernel: ACPI: Added _OSI(Processor Device) Aug 13 00:54:02.188728 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 13 00:54:02.188742 kernel: ACPI: Added _OSI(Linux-Dell-Video) Aug 13 00:54:02.188755 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Aug 13 00:54:02.188769 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Aug 13 00:54:02.188780 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Aug 13 00:54:02.188794 kernel: ACPI: Interpreter enabled Aug 13 00:54:02.188807 kernel: ACPI: PM: (supports S0 S5) Aug 13 00:54:02.188822 kernel: ACPI: Using IOAPIC for interrupt routing Aug 13 00:54:02.188837 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Aug 13 00:54:02.188851 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Aug 13 00:54:02.188865 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Aug 13 00:54:02.189201 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Aug 13 00:54:02.189344 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. 
Aug 13 00:54:02.189364 kernel: acpiphp: Slot [3] registered Aug 13 00:54:02.189373 kernel: acpiphp: Slot [4] registered Aug 13 00:54:02.189382 kernel: acpiphp: Slot [5] registered Aug 13 00:54:02.189391 kernel: acpiphp: Slot [6] registered Aug 13 00:54:02.189400 kernel: acpiphp: Slot [7] registered Aug 13 00:54:02.189409 kernel: acpiphp: Slot [8] registered Aug 13 00:54:02.189418 kernel: acpiphp: Slot [9] registered Aug 13 00:54:02.189427 kernel: acpiphp: Slot [10] registered Aug 13 00:54:02.189436 kernel: acpiphp: Slot [11] registered Aug 13 00:54:02.189448 kernel: acpiphp: Slot [12] registered Aug 13 00:54:02.189457 kernel: acpiphp: Slot [13] registered Aug 13 00:54:02.189465 kernel: acpiphp: Slot [14] registered Aug 13 00:54:02.189475 kernel: acpiphp: Slot [15] registered Aug 13 00:54:02.189483 kernel: acpiphp: Slot [16] registered Aug 13 00:54:02.189492 kernel: acpiphp: Slot [17] registered Aug 13 00:54:02.189501 kernel: acpiphp: Slot [18] registered Aug 13 00:54:02.189509 kernel: acpiphp: Slot [19] registered Aug 13 00:54:02.189518 kernel: acpiphp: Slot [20] registered Aug 13 00:54:02.189529 kernel: acpiphp: Slot [21] registered Aug 13 00:54:02.189538 kernel: acpiphp: Slot [22] registered Aug 13 00:54:02.189547 kernel: acpiphp: Slot [23] registered Aug 13 00:54:02.189555 kernel: acpiphp: Slot [24] registered Aug 13 00:54:02.189564 kernel: acpiphp: Slot [25] registered Aug 13 00:54:02.189575 kernel: acpiphp: Slot [26] registered Aug 13 00:54:02.189587 kernel: acpiphp: Slot [27] registered Aug 13 00:54:02.189600 kernel: acpiphp: Slot [28] registered Aug 13 00:54:02.189610 kernel: acpiphp: Slot [29] registered Aug 13 00:54:02.189619 kernel: acpiphp: Slot [30] registered Aug 13 00:54:02.189631 kernel: acpiphp: Slot [31] registered Aug 13 00:54:02.189640 kernel: PCI host bridge to bus 0000:00 Aug 13 00:54:02.189825 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Aug 13 00:54:02.189929 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Aug 13 00:54:02.190053 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Aug 13 00:54:02.190153 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Aug 13 00:54:02.190268 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Aug 13 00:54:02.190365 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Aug 13 00:54:02.190497 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Aug 13 00:54:02.190619 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Aug 13 00:54:02.190766 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Aug 13 00:54:02.190958 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef] Aug 13 00:54:02.191082 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Aug 13 00:54:02.191249 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Aug 13 00:54:02.191420 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Aug 13 00:54:02.191554 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Aug 13 00:54:02.191717 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 Aug 13 00:54:02.191845 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f] Aug 13 00:54:02.192042 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Aug 13 00:54:02.195042 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Aug 13 00:54:02.195281 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Aug 13 00:54:02.195409 
kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Aug 13 00:54:02.195549 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Aug 13 00:54:02.195680 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Aug 13 00:54:02.195811 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff] Aug 13 00:54:02.195905 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Aug 13 00:54:02.196027 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Aug 13 00:54:02.196208 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Aug 13 00:54:02.196308 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf] Aug 13 00:54:02.196420 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff] Aug 13 00:54:02.196513 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Aug 13 00:54:02.196653 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Aug 13 00:54:02.196750 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df] Aug 13 00:54:02.196872 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff] Aug 13 00:54:02.197008 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Aug 13 00:54:02.199257 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 Aug 13 00:54:02.199493 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f] Aug 13 00:54:02.199606 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff] Aug 13 00:54:02.199744 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Aug 13 00:54:02.199886 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 Aug 13 00:54:02.200001 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f] Aug 13 00:54:02.200140 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff] Aug 13 00:54:02.200267 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Aug 13 00:54:02.200385 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 Aug 13 00:54:02.200545 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff] Aug 13 00:54:02.200652 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff] Aug 13 00:54:02.200779 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref] Aug 13 00:54:02.200914 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 Aug 13 00:54:02.201021 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f] Aug 13 00:54:02.201124 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref] Aug 13 00:54:02.201135 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Aug 13 00:54:02.201145 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Aug 13 00:54:02.201154 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Aug 13 00:54:02.201163 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Aug 13 00:54:02.203974 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Aug 13 00:54:02.204024 kernel: iommu: Default domain type: Translated Aug 13 00:54:02.204035 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Aug 13 00:54:02.204415 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Aug 13 00:54:02.204555 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Aug 13 00:54:02.204696 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Aug 13 00:54:02.204714 kernel: vgaarb: loaded Aug 13 00:54:02.204727 kernel: pps_core: LinuxPPS API ver. 
1 registered Aug 13 00:54:02.204736 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it> Aug 13 00:54:02.204803 kernel: PTP clock support registered Aug 13 00:54:02.204815 kernel: PCI: Using ACPI for IRQ routing Aug 13 00:54:02.204831 kernel: PCI: pci_cache_line_size set to 64 bytes Aug 13 00:54:02.204844 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Aug 13 00:54:02.204856 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff] Aug 13 00:54:02.204868 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Aug 13 00:54:02.204881 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Aug 13 00:54:02.204895 kernel: clocksource: Switched to clocksource kvm-clock Aug 13 00:54:02.204923 kernel: VFS: Disk quotas dquot_6.6.0 Aug 13 00:54:02.204933 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 13 00:54:02.204942 kernel: pnp: PnP ACPI init Aug 13 00:54:02.204951 kernel: pnp: PnP ACPI: found 4 devices Aug 13 00:54:02.204960 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Aug 13 00:54:02.204969 kernel: NET: Registered PF_INET protocol family Aug 13 00:54:02.204978 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Aug 13 00:54:02.204988 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Aug 13 00:54:02.204997 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 13 00:54:02.205009 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Aug 13 00:54:02.205018 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) Aug 13 00:54:02.205027 kernel: TCP: Hash tables configured (established 16384 bind 16384) Aug 13 00:54:02.205035 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Aug 13 00:54:02.205044 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Aug 13 00:54:02.205053 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 13 00:54:02.205061 kernel: NET: Registered PF_XDP protocol family Aug 13 00:54:02.205189 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Aug 13 00:54:02.205277 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Aug 13 00:54:02.205364 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Aug 13 00:54:02.205446 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Aug 13 00:54:02.205543 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Aug 13 00:54:02.205646 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Aug 13 00:54:02.205745 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Aug 13 00:54:02.205862 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds Aug 13 00:54:02.205882 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Aug 13 00:54:02.206080 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x740 took 44084 usecs Aug 13 00:54:02.206108 kernel: PCI: CLS 0 bytes, default 64 Aug 13 00:54:02.206117 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Aug 13 00:54:02.206127 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x3985bd6d44e, max_idle_ns: 881590467931 ns Aug 13 00:54:02.206136 kernel: Initialise system trusted keyrings Aug 13 00:54:02.206145 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Aug 13 00:54:02.206154 kernel: Key type asymmetric registered Aug 13 00:54:02.206163 kernel: Asymmetric key parser
'x509' registered Aug 13 00:54:02.206195 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Aug 13 00:54:02.206204 kernel: io scheduler mq-deadline registered Aug 13 00:54:02.206216 kernel: io scheduler kyber registered Aug 13 00:54:02.206228 kernel: io scheduler bfq registered Aug 13 00:54:02.206244 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Aug 13 00:54:02.206257 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Aug 13 00:54:02.206269 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Aug 13 00:54:02.206282 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Aug 13 00:54:02.206295 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 13 00:54:02.206307 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Aug 13 00:54:02.206316 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Aug 13 00:54:02.206328 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Aug 13 00:54:02.206337 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Aug 13 00:54:02.206347 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Aug 13 00:54:02.206533 kernel: rtc_cmos 00:03: RTC can wake from S4 Aug 13 00:54:02.206672 kernel: rtc_cmos 00:03: registered as rtc0 Aug 13 00:54:02.206808 kernel: rtc_cmos 00:03: setting system clock to 2025-08-13T00:54:01 UTC (1755046441) Aug 13 00:54:02.206942 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Aug 13 00:54:02.206966 kernel: intel_pstate: CPU model not supported Aug 13 00:54:02.206981 kernel: NET: Registered PF_INET6 protocol family Aug 13 00:54:02.206994 kernel: Segment Routing with IPv6 Aug 13 00:54:02.207008 kernel: In-situ OAM (IOAM) with IPv6 Aug 13 00:54:02.207022 kernel: NET: Registered PF_PACKET protocol family Aug 13 00:54:02.207035 kernel: Key type dns_resolver registered Aug 13 00:54:02.207049 kernel: IPI shorthand broadcast: enabled Aug 13 00:54:02.207063 kernel: sched_clock: Marking stable (835149446, 172075792)->(1207414271, -200189033) Aug 13 00:54:02.207076 kernel: registered taskstats version 1 Aug 13 00:54:02.207090 kernel: Loading compiled-in X.509 certificates Aug 13 00:54:02.207107 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.189-flatcar: 1d5a64b5798e654719a8bd91d683e7e9894bd433' Aug 13 00:54:02.207121 kernel: Key type .fscrypt registered Aug 13 00:54:02.207135 kernel: Key type fscrypt-provisioning registered Aug 13 00:54:02.207149 kernel: ima: No TPM chip found, activating TPM-bypass! 
Aug 13 00:54:02.207162 kernel: ima: Allocated hash algorithm: sha1 Aug 13 00:54:02.210298 kernel: ima: No architecture policies found Aug 13 00:54:02.210317 kernel: clk: Disabling unused clocks Aug 13 00:54:02.210327 kernel: Freeing unused kernel image (initmem) memory: 47488K Aug 13 00:54:02.210343 kernel: Write protecting the kernel read-only data: 28672k Aug 13 00:54:02.210352 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Aug 13 00:54:02.210361 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K Aug 13 00:54:02.210370 kernel: Run /init as init process Aug 13 00:54:02.210379 kernel: with arguments: Aug 13 00:54:02.210388 kernel: /init Aug 13 00:54:02.210416 kernel: with environment: Aug 13 00:54:02.210428 kernel: HOME=/ Aug 13 00:54:02.210436 kernel: TERM=linux Aug 13 00:54:02.210448 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 13 00:54:02.210463 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Aug 13 00:54:02.210476 systemd[1]: Detected virtualization kvm. Aug 13 00:54:02.210486 systemd[1]: Detected architecture x86-64. Aug 13 00:54:02.210495 systemd[1]: Running in initrd. Aug 13 00:54:02.210505 systemd[1]: No hostname configured, using default hostname. Aug 13 00:54:02.210515 systemd[1]: Hostname set to <localhost>. Aug 13 00:54:02.210527 systemd[1]: Initializing machine ID from VM UUID. Aug 13 00:54:02.210537 systemd[1]: Queued start job for default target initrd.target. Aug 13 00:54:02.210546 systemd[1]: Started systemd-ask-password-console.path. Aug 13 00:54:02.210555 systemd[1]: Reached target cryptsetup.target. Aug 13 00:54:02.210565 systemd[1]: Reached target paths.target. Aug 13 00:54:02.210574 systemd[1]: Reached target slices.target. Aug 13 00:54:02.210583 systemd[1]: Reached target swap.target. Aug 13 00:54:02.210593 systemd[1]: Reached target timers.target. Aug 13 00:54:02.210605 systemd[1]: Listening on iscsid.socket. Aug 13 00:54:02.210614 systemd[1]: Listening on iscsiuio.socket. Aug 13 00:54:02.210624 systemd[1]: Listening on systemd-journald-audit.socket. Aug 13 00:54:02.210633 systemd[1]: Listening on systemd-journald-dev-log.socket. Aug 13 00:54:02.210642 systemd[1]: Listening on systemd-journald.socket. Aug 13 00:54:02.210652 systemd[1]: Listening on systemd-networkd.socket. Aug 13 00:54:02.210664 systemd[1]: Listening on systemd-udevd-control.socket. Aug 13 00:54:02.210674 systemd[1]: Listening on systemd-udevd-kernel.socket. Aug 13 00:54:02.210683 systemd[1]: Reached target sockets.target. Aug 13 00:54:02.210694 systemd[1]: Starting kmod-static-nodes.service... Aug 13 00:54:02.210704 systemd[1]: Finished network-cleanup.service. Aug 13 00:54:02.210716 systemd[1]: Starting systemd-fsck-usr.service... Aug 13 00:54:02.210725 systemd[1]: Starting systemd-journald.service... Aug 13 00:54:02.210734 systemd[1]: Starting systemd-modules-load.service... Aug 13 00:54:02.210746 systemd[1]: Starting systemd-resolved.service... Aug 13 00:54:02.210755 systemd[1]: Starting systemd-vconsole-setup.service... Aug 13 00:54:02.210764 systemd[1]: Finished kmod-static-nodes.service. Aug 13 00:54:02.210774 systemd[1]: Finished systemd-fsck-usr.service.
Aug 13 00:54:02.210791 systemd-journald[184]: Journal started Aug 13 00:54:02.210889 systemd-journald[184]: Runtime Journal (/run/log/journal/882e35e7625e4855a2ad0c2361991e74) is 4.9M, max 39.5M, 34.5M free. Aug 13 00:54:02.197737 systemd-modules-load[185]: Inserted module 'overlay' Aug 13 00:54:02.270163 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 13 00:54:02.270250 kernel: Bridge firewalling registered Aug 13 00:54:02.244237 systemd-resolved[186]: Positive Trust Anchors: Aug 13 00:54:02.277302 systemd[1]: Started systemd-journald.service. Aug 13 00:54:02.277351 kernel: audit: type=1130 audit(1755046442.270:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:02.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:02.244252 systemd-resolved[186]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 00:54:02.290358 kernel: audit: type=1130 audit(1755046442.277:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:02.290403 kernel: SCSI subsystem initialized Aug 13 00:54:02.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:02.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:02.244301 systemd-resolved[186]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Aug 13 00:54:02.299355 kernel: audit: type=1130 audit(1755046442.284:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:02.256097 systemd-resolved[186]: Defaulting to hostname 'linux'. Aug 13 00:54:02.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:02.257011 systemd-modules-load[185]: Inserted module 'br_netfilter' Aug 13 00:54:02.307878 kernel: audit: type=1130 audit(1755046442.285:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:02.278434 systemd[1]: Started systemd-resolved.service. Aug 13 00:54:02.284998 systemd[1]: Finished systemd-vconsole-setup.service. 
Aug 13 00:54:02.285918 systemd[1]: Reached target nss-lookup.target. Aug 13 00:54:02.287745 systemd[1]: Starting dracut-cmdline-ask.service... Aug 13 00:54:02.298827 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Aug 13 00:54:02.316368 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Aug 13 00:54:02.331792 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 13 00:54:02.331871 kernel: device-mapper: uevent: version 1.0.3 Aug 13 00:54:02.331891 kernel: audit: type=1130 audit(1755046442.317:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:02.331911 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Aug 13 00:54:02.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:02.333813 systemd-modules-load[185]: Inserted module 'dm_multipath' Aug 13 00:54:02.335160 systemd[1]: Finished systemd-modules-load.service. Aug 13 00:54:02.359576 kernel: audit: type=1130 audit(1755046442.335:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:02.359622 kernel: audit: type=1130 audit(1755046442.345:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:02.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:02.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:02.337407 systemd[1]: Starting systemd-sysctl.service... Aug 13 00:54:02.345415 systemd[1]: Finished dracut-cmdline-ask.service. Aug 13 00:54:02.351309 systemd[1]: Starting dracut-cmdline.service... Aug 13 00:54:02.366081 systemd[1]: Finished systemd-sysctl.service. Aug 13 00:54:02.367023 dracut-cmdline[206]: dracut-dracut-053 Aug 13 00:54:02.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:02.372369 dracut-cmdline[206]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8f8aacd9fbcdd713563d390e899e90bedf5577e4b1b261b4e57687d87edd6b57 Aug 13 00:54:02.375902 kernel: audit: type=1130 audit(1755046442.368:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:54:02.473275 kernel: Loading iSCSI transport class v2.0-870. Aug 13 00:54:02.502589 kernel: iscsi: registered transport (tcp) Aug 13 00:54:02.540897 kernel: iscsi: registered transport (qla4xxx) Aug 13 00:54:02.541010 kernel: QLogic iSCSI HBA Driver Aug 13 00:54:02.614608 systemd[1]: Finished dracut-cmdline.service. Aug 13 00:54:02.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:02.616764 systemd[1]: Starting dracut-pre-udev.service... Aug 13 00:54:02.621504 kernel: audit: type=1130 audit(1755046442.615:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:02.682244 kernel: raid6: avx2x4 gen() 24792 MB/s Aug 13 00:54:02.699224 kernel: raid6: avx2x4 xor() 9165 MB/s Aug 13 00:54:02.716225 kernel: raid6: avx2x2 gen() 26872 MB/s Aug 13 00:54:02.734239 kernel: raid6: avx2x2 xor() 10554 MB/s Aug 13 00:54:02.752223 kernel: raid6: avx2x1 gen() 24183 MB/s Aug 13 00:54:02.768234 kernel: raid6: avx2x1 xor() 13052 MB/s Aug 13 00:54:02.786209 kernel: raid6: sse2x4 gen() 11160 MB/s Aug 13 00:54:02.803221 kernel: raid6: sse2x4 xor() 5348 MB/s Aug 13 00:54:02.821253 kernel: raid6: sse2x2 gen() 8683 MB/s Aug 13 00:54:02.839237 kernel: raid6: sse2x2 xor() 5954 MB/s Aug 13 00:54:02.857259 kernel: raid6: sse2x1 gen() 6540 MB/s Aug 13 00:54:02.875450 kernel: raid6: sse2x1 xor() 4442 MB/s Aug 13 00:54:02.875584 kernel: raid6: using algorithm avx2x2 gen() 26872 MB/s Aug 13 00:54:02.875602 kernel: raid6: .... xor() 10554 MB/s, rmw enabled Aug 13 00:54:02.876539 kernel: raid6: using avx2x2 recovery algorithm Aug 13 00:54:02.895252 kernel: xor: automatically using best checksumming function avx Aug 13 00:54:03.038248 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Aug 13 00:54:03.054153 systemd[1]: Finished dracut-pre-udev.service. Aug 13 00:54:03.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:03.055000 audit: BPF prog-id=7 op=LOAD Aug 13 00:54:03.055000 audit: BPF prog-id=8 op=LOAD Aug 13 00:54:03.056038 systemd[1]: Starting systemd-udevd.service... Aug 13 00:54:03.076648 systemd-udevd[384]: Using default interface naming scheme 'v252'. Aug 13 00:54:03.083273 systemd[1]: Started systemd-udevd.service. Aug 13 00:54:03.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:03.088911 systemd[1]: Starting dracut-pre-trigger.service... Aug 13 00:54:03.110429 dracut-pre-trigger[396]: rd.md=0: removing MD RAID activation Aug 13 00:54:03.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:03.167843 systemd[1]: Finished dracut-pre-trigger.service. Aug 13 00:54:03.171437 systemd[1]: Starting systemd-udev-trigger.service... Aug 13 00:54:03.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Aug 13 00:54:03.235667 systemd[1]: Finished systemd-udev-trigger.service. Aug 13 00:54:03.341222 kernel: scsi host0: Virtio SCSI HBA Aug 13 00:54:03.348121 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Aug 13 00:54:03.421058 kernel: libata version 3.00 loaded. Aug 13 00:54:03.421101 kernel: ata_piix 0000:00:01.1: version 2.13 Aug 13 00:54:03.421370 kernel: ACPI: bus type USB registered Aug 13 00:54:03.421391 kernel: usbcore: registered new interface driver usbfs Aug 13 00:54:03.421409 kernel: scsi host1: ata_piix Aug 13 00:54:03.421607 kernel: usbcore: registered new interface driver hub Aug 13 00:54:03.421636 kernel: usbcore: registered new device driver usb Aug 13 00:54:03.421653 kernel: cryptd: max_cpu_qlen set to 1000 Aug 13 00:54:03.421670 kernel: scsi host2: ata_piix Aug 13 00:54:03.421836 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Aug 13 00:54:03.421855 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Aug 13 00:54:03.421872 kernel: ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver Aug 13 00:54:03.421889 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Aug 13 00:54:03.421907 kernel: GPT:9289727 != 125829119 Aug 13 00:54:03.421924 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 13 00:54:03.421945 kernel: GPT:9289727 != 125829119 Aug 13 00:54:03.421963 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 13 00:54:03.421980 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 00:54:03.430431 kernel: virtio_blk virtio5: [vdb] 976 512-byte logical blocks (500 kB/488 KiB) Aug 13 00:54:03.540253 kernel: ehci-pci: EHCI PCI platform driver Aug 13 00:54:03.566316 kernel: AVX2 version of gcm_enc/dec engaged. Aug 13 00:54:03.566409 kernel: AES CTR mode by8 optimization enabled Aug 13 00:54:03.593632 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Aug 13 00:54:03.604227 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (442) Aug 13 00:54:03.604451 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Aug 13 00:54:03.609205 kernel: uhci_hcd: USB Universal Host Controller Interface driver Aug 13 00:54:03.619082 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Aug 13 00:54:03.621218 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Aug 13 00:54:03.627821 systemd[1]: Starting disk-uuid.service... Aug 13 00:54:03.637929 disk-uuid[476]: Primary Header is updated. Aug 13 00:54:03.637929 disk-uuid[476]: Secondary Entries is updated. Aug 13 00:54:03.637929 disk-uuid[476]: Secondary Header is updated. Aug 13 00:54:03.658460 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Aug 13 00:54:03.711641 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Aug 13 00:54:03.722160 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Aug 13 00:54:03.722367 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Aug 13 00:54:03.722503 kernel: uhci_hcd 0000:00:01.2: irq 11, io base 0x0000c180 Aug 13 00:54:03.722657 kernel: hub 1-0:1.0: USB hub found Aug 13 00:54:03.722815 kernel: hub 1-0:1.0: 2 ports detected Aug 13 00:54:04.648625 disk-uuid[477]: The operation has completed successfully. Aug 13 00:54:04.649608 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 13 00:54:04.706478 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 13 00:54:04.706641 systemd[1]: Finished disk-uuid.service. 
Aug 13 00:54:04.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:04.707000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:04.714787 systemd[1]: Starting verity-setup.service... Aug 13 00:54:04.738607 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Aug 13 00:54:04.797322 systemd[1]: Found device dev-mapper-usr.device. Aug 13 00:54:04.800387 systemd[1]: Mounting sysusr-usr.mount... Aug 13 00:54:04.802548 systemd[1]: Finished verity-setup.service. Aug 13 00:54:04.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:04.915233 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Aug 13 00:54:04.916038 systemd[1]: Mounted sysusr-usr.mount. Aug 13 00:54:04.916877 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Aug 13 00:54:04.917937 systemd[1]: Starting ignition-setup.service... Aug 13 00:54:04.920131 systemd[1]: Starting parse-ip-for-networkd.service... Aug 13 00:54:04.941230 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 00:54:04.941305 kernel: BTRFS info (device vda6): using free space tree Aug 13 00:54:04.941318 kernel: BTRFS info (device vda6): has skinny extents Aug 13 00:54:04.968881 systemd[1]: mnt-oem.mount: Deactivated successfully. Aug 13 00:54:04.976938 systemd[1]: Finished ignition-setup.service. Aug 13 00:54:04.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:04.979440 systemd[1]: Starting ignition-fetch-offline.service... Aug 13 00:54:05.101822 systemd[1]: Finished parse-ip-for-networkd.service. Aug 13 00:54:05.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:05.103000 audit: BPF prog-id=9 op=LOAD Aug 13 00:54:05.105376 systemd[1]: Starting systemd-networkd.service... Aug 13 00:54:05.139562 ignition[616]: Ignition 2.14.0 Aug 13 00:54:05.140584 ignition[616]: Stage: fetch-offline Aug 13 00:54:05.141421 ignition[616]: reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:54:05.142458 ignition[616]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Aug 13 00:54:05.143977 systemd-networkd[688]: lo: Link UP Aug 13 00:54:05.143992 systemd-networkd[688]: lo: Gained carrier Aug 13 00:54:05.145658 systemd-networkd[688]: Enumeration completed Aug 13 00:54:05.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:05.145846 systemd[1]: Started systemd-networkd.service. Aug 13 00:54:05.146526 systemd[1]: Reached target network.target. 
Aug 13 00:54:05.146765 systemd-networkd[688]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:54:05.148454 systemd-networkd[688]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Aug 13 00:54:05.149936 systemd[1]: Starting iscsiuio.service... Aug 13 00:54:05.161020 ignition[616]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 13 00:54:05.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:05.161193 ignition[616]: parsed url from cmdline: "" Aug 13 00:54:05.162328 systemd-networkd[688]: eth1: Link UP Aug 13 00:54:05.161200 ignition[616]: no config URL provided Aug 13 00:54:05.162333 systemd-networkd[688]: eth1: Gained carrier Aug 13 00:54:05.161211 ignition[616]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 00:54:05.162865 systemd[1]: Finished ignition-fetch-offline.service. Aug 13 00:54:05.161230 ignition[616]: no config at "/usr/lib/ignition/user.ign" Aug 13 00:54:05.165419 systemd[1]: Starting ignition-fetch.service... Aug 13 00:54:05.161239 ignition[616]: failed to fetch config: resource requires networking Aug 13 00:54:05.161411 ignition[616]: Ignition finished successfully Aug 13 00:54:05.172403 systemd-networkd[688]: eth0: Link UP Aug 13 00:54:05.172410 systemd-networkd[688]: eth0: Gained carrier Aug 13 00:54:05.193524 ignition[693]: Ignition 2.14.0 Aug 13 00:54:05.194796 ignition[693]: Stage: fetch Aug 13 00:54:05.195812 ignition[693]: reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:54:05.196907 ignition[693]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Aug 13 00:54:05.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:05.200458 systemd[1]: Started iscsiuio.service. Aug 13 00:54:05.202850 systemd[1]: Starting iscsid.service... Aug 13 00:54:05.203581 ignition[693]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 13 00:54:05.203766 ignition[693]: parsed url from cmdline: "" Aug 13 00:54:05.203772 ignition[693]: no config URL provided Aug 13 00:54:05.203782 ignition[693]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 00:54:05.203797 ignition[693]: no config at "/usr/lib/ignition/user.ign" Aug 13 00:54:05.203839 ignition[693]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Aug 13 00:54:05.211359 systemd-networkd[688]: eth1: DHCPv4 address 10.124.0.27/20 acquired from 169.254.169.253 Aug 13 00:54:05.212889 systemd-networkd[688]: eth0: DHCPv4 address 143.198.229.35/20, gateway 143.198.224.1 acquired from 169.254.169.253 Aug 13 00:54:05.218880 iscsid[699]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Aug 13 00:54:05.218880 iscsid[699]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Aug 13 00:54:05.218880 iscsid[699]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Aug 13 00:54:05.218880 iscsid[699]: If using hardware iscsi like qla4xxx this message can be ignored. Aug 13 00:54:05.218880 iscsid[699]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Aug 13 00:54:05.218880 iscsid[699]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Aug 13 00:54:05.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:05.221480 systemd[1]: Started iscsid.service. Aug 13 00:54:05.226564 systemd[1]: Starting dracut-initqueue.service... Aug 13 00:54:05.234340 ignition[693]: GET result: OK Aug 13 00:54:05.234589 ignition[693]: parsing config with SHA512: 1e0d24048972e59d6b72bf3d0616ff511c4fd2f707225402c0d7c45baef8a9cf3c0a83e6a15c831ff63a1ab1a207bfe89c5ee5b071da3c8ab4af7400d7c1d729 Aug 13 00:54:05.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:05.250159 systemd[1]: Finished dracut-initqueue.service. Aug 13 00:54:05.251038 systemd[1]: Reached target remote-fs-pre.target. Aug 13 00:54:05.251700 systemd[1]: Reached target remote-cryptsetup.target. Aug 13 00:54:05.252340 systemd[1]: Reached target remote-fs.target. Aug 13 00:54:05.255151 unknown[693]: fetched base config from "system" Aug 13 00:54:05.256106 ignition[693]: fetch: fetch complete Aug 13 00:54:05.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:05.255186 unknown[693]: fetched base config from "system" Aug 13 00:54:05.256116 ignition[693]: fetch: fetch passed Aug 13 00:54:05.255196 unknown[693]: fetched user config from "digitalocean" Aug 13 00:54:05.256198 ignition[693]: Ignition finished successfully Aug 13 00:54:05.258521 systemd[1]: Starting dracut-pre-mount.service... Aug 13 00:54:05.261262 systemd[1]: Finished ignition-fetch.service. Aug 13 00:54:05.263497 systemd[1]: Starting ignition-kargs.service... Aug 13 00:54:05.279477 systemd[1]: Finished dracut-pre-mount.service. Aug 13 00:54:05.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:05.281632 ignition[710]: Ignition 2.14.0 Aug 13 00:54:05.281650 ignition[710]: Stage: kargs Aug 13 00:54:05.281835 ignition[710]: reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:54:05.281865 ignition[710]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Aug 13 00:54:05.285075 ignition[710]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 13 00:54:05.293537 ignition[710]: kargs: kargs passed Aug 13 00:54:05.293655 ignition[710]: Ignition finished successfully Aug 13 00:54:05.296603 systemd[1]: Finished ignition-kargs.service. Aug 13 00:54:05.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:54:05.298733 systemd[1]: Starting ignition-disks.service... Aug 13 00:54:05.316566 ignition[720]: Ignition 2.14.0 Aug 13 00:54:05.316579 ignition[720]: Stage: disks Aug 13 00:54:05.316801 ignition[720]: reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:54:05.316830 ignition[720]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Aug 13 00:54:05.319897 ignition[720]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 13 00:54:05.322525 ignition[720]: disks: disks passed Aug 13 00:54:05.322613 ignition[720]: Ignition finished successfully Aug 13 00:54:05.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:05.323860 systemd[1]: Finished ignition-disks.service. Aug 13 00:54:05.324652 systemd[1]: Reached target initrd-root-device.target. Aug 13 00:54:05.325771 systemd[1]: Reached target local-fs-pre.target. Aug 13 00:54:05.327035 systemd[1]: Reached target local-fs.target. Aug 13 00:54:05.328247 systemd[1]: Reached target sysinit.target. Aug 13 00:54:05.329389 systemd[1]: Reached target basic.target. Aug 13 00:54:05.332040 systemd[1]: Starting systemd-fsck-root.service... Aug 13 00:54:05.353315 systemd-fsck[728]: ROOT: clean, 629/553520 files, 56027/553472 blocks Aug 13 00:54:05.360822 systemd[1]: Finished systemd-fsck-root.service. Aug 13 00:54:05.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:05.363455 systemd[1]: Mounting sysroot.mount... Aug 13 00:54:05.375197 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Aug 13 00:54:05.376813 systemd[1]: Mounted sysroot.mount. Aug 13 00:54:05.377613 systemd[1]: Reached target initrd-root-fs.target. Aug 13 00:54:05.380720 systemd[1]: Mounting sysroot-usr.mount... Aug 13 00:54:05.384067 systemd[1]: Starting flatcar-digitalocean-network.service... Aug 13 00:54:05.390487 systemd[1]: Starting flatcar-metadata-hostname.service... Aug 13 00:54:05.391676 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 13 00:54:05.391717 systemd[1]: Reached target ignition-diskful.target. Aug 13 00:54:05.394569 systemd[1]: Mounted sysroot-usr.mount. Aug 13 00:54:05.399011 systemd[1]: Starting initrd-setup-root.service... Aug 13 00:54:05.409350 initrd-setup-root[740]: cut: /sysroot/etc/passwd: No such file or directory Aug 13 00:54:05.434540 initrd-setup-root[748]: cut: /sysroot/etc/group: No such file or directory Aug 13 00:54:05.445940 initrd-setup-root[756]: cut: /sysroot/etc/shadow: No such file or directory Aug 13 00:54:05.462914 initrd-setup-root[766]: cut: /sysroot/etc/gshadow: No such file or directory Aug 13 00:54:05.552433 coreos-metadata[734]: Aug 13 00:54:05.552 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Aug 13 00:54:05.555587 systemd[1]: Finished initrd-setup-root.service. Aug 13 00:54:05.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:54:05.557789 systemd[1]: Starting ignition-mount.service... Aug 13 00:54:05.559691 systemd[1]: Starting sysroot-boot.service... Aug 13 00:54:05.574528 coreos-metadata[734]: Aug 13 00:54:05.574 INFO Fetch successful Aug 13 00:54:05.576561 coreos-metadata[735]: Aug 13 00:54:05.576 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Aug 13 00:54:05.583409 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Aug 13 00:54:05.583568 systemd[1]: Finished flatcar-digitalocean-network.service. Aug 13 00:54:05.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:05.584000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:05.585831 bash[785]: umount: /sysroot/usr/share/oem: not mounted. Aug 13 00:54:05.591558 coreos-metadata[735]: Aug 13 00:54:05.591 INFO Fetch successful Aug 13 00:54:05.605085 coreos-metadata[735]: Aug 13 00:54:05.604 INFO wrote hostname ci-3510.3.8-f-585a890caa to /sysroot/etc/hostname Aug 13 00:54:05.606272 systemd[1]: Finished sysroot-boot.service. Aug 13 00:54:05.607345 systemd[1]: Finished flatcar-metadata-hostname.service. Aug 13 00:54:05.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:05.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:05.615209 ignition[787]: INFO : Ignition 2.14.0 Aug 13 00:54:05.615209 ignition[787]: INFO : Stage: mount Aug 13 00:54:05.615209 ignition[787]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:54:05.615209 ignition[787]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Aug 13 00:54:05.622064 ignition[787]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 13 00:54:05.622064 ignition[787]: INFO : mount: mount passed Aug 13 00:54:05.622064 ignition[787]: INFO : Ignition finished successfully Aug 13 00:54:05.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:05.621916 systemd[1]: Finished ignition-mount.service. Aug 13 00:54:05.824012 systemd[1]: Mounting sysroot-usr-share-oem.mount... Aug 13 00:54:05.835300 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (795) Aug 13 00:54:05.839368 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 00:54:05.839455 kernel: BTRFS info (device vda6): using free space tree Aug 13 00:54:05.839467 kernel: BTRFS info (device vda6): has skinny extents Aug 13 00:54:05.846776 systemd[1]: Mounted sysroot-usr-share-oem.mount. Aug 13 00:54:05.849550 systemd[1]: Starting ignition-files.service... 
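[Both coreos-metadata fetches above, like Ignition's user-data fetch before them, hit DigitalOcean's link-local metadata service at 169.254.169.254. The same service can be queried from inside the droplet; as an illustrative check (not part of the captured boot), the hostname value written to /sysroot/etc/hostname is also exposed directly:

    # DigitalOcean droplet metadata; link-local, no authentication required
    curl -s http://169.254.169.254/metadata/v1/hostname
]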
Aug 13 00:54:05.884469 ignition[815]: INFO : Ignition 2.14.0 Aug 13 00:54:05.884469 ignition[815]: INFO : Stage: files Aug 13 00:54:05.886568 ignition[815]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:54:05.886568 ignition[815]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Aug 13 00:54:05.888723 ignition[815]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 13 00:54:05.890394 ignition[815]: DEBUG : files: compiled without relabeling support, skipping Aug 13 00:54:05.891806 ignition[815]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 13 00:54:05.891806 ignition[815]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 13 00:54:05.897111 ignition[815]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 13 00:54:05.899595 ignition[815]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 13 00:54:05.900844 ignition[815]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 13 00:54:05.899620 unknown[815]: wrote ssh authorized keys file for user: core Aug 13 00:54:05.907342 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Aug 13 00:54:05.907342 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Aug 13 00:54:05.956044 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Aug 13 00:54:06.089351 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Aug 13 00:54:06.091235 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 13 00:54:06.091235 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Aug 13 00:54:06.320759 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Aug 13 00:54:06.327614 systemd-networkd[688]: eth0: Gained IPv6LL Aug 13 00:54:06.472736 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 13 00:54:06.472736 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Aug 13 00:54:06.475474 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Aug 13 00:54:06.475474 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 13 00:54:06.475474 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 13 00:54:06.475474 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 00:54:06.475474 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 00:54:06.475474 ignition[815]: INFO : 
files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 00:54:06.475474 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 00:54:06.475474 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 00:54:06.475474 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 00:54:06.475474 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 13 00:54:06.475474 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 13 00:54:06.475474 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 13 00:54:06.475474 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Aug 13 00:54:07.039793 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Aug 13 00:54:07.159332 systemd-networkd[688]: eth1: Gained IPv6LL Aug 13 00:54:07.571662 ignition[815]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 13 00:54:07.571662 ignition[815]: INFO : files: op(c): [started] processing unit "coreos-metadata-sshkeys@.service" Aug 13 00:54:07.571662 ignition[815]: INFO : files: op(c): [finished] processing unit "coreos-metadata-sshkeys@.service" Aug 13 00:54:07.571662 ignition[815]: INFO : files: op(d): [started] processing unit "prepare-helm.service" Aug 13 00:54:07.576730 ignition[815]: INFO : files: op(d): op(e): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 00:54:07.576730 ignition[815]: INFO : files: op(d): op(e): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 00:54:07.576730 ignition[815]: INFO : files: op(d): [finished] processing unit "prepare-helm.service" Aug 13 00:54:07.576730 ignition[815]: INFO : files: op(f): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Aug 13 00:54:07.576730 ignition[815]: INFO : files: op(f): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Aug 13 00:54:07.576730 ignition[815]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Aug 13 00:54:07.576730 ignition[815]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Aug 13 00:54:07.585615 ignition[815]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 13 00:54:07.585615 ignition[815]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 13 00:54:07.585615 ignition[815]: INFO : files: files passed Aug 13 00:54:07.585615 ignition[815]: INFO : Ignition finished successfully Aug 13 00:54:07.596381 kernel: kauditd_printk_skb: 
28 callbacks suppressed Aug 13 00:54:07.596423 kernel: audit: type=1130 audit(1755046447.589:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:07.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:07.588134 systemd[1]: Finished ignition-files.service. Aug 13 00:54:07.591199 systemd[1]: Starting initrd-setup-root-after-ignition.service... Aug 13 00:54:07.598978 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Aug 13 00:54:07.600428 systemd[1]: Starting ignition-quench.service... Aug 13 00:54:07.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:07.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:07.606314 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 13 00:54:07.617752 kernel: audit: type=1130 audit(1755046447.607:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:07.617785 kernel: audit: type=1131 audit(1755046447.607:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:07.617853 initrd-setup-root-after-ignition[840]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 00:54:07.606473 systemd[1]: Finished ignition-quench.service. Aug 13 00:54:07.626673 kernel: audit: type=1130 audit(1755046447.619:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:07.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:07.618606 systemd[1]: Finished initrd-setup-root-after-ignition.service. Aug 13 00:54:07.619958 systemd[1]: Reached target ignition-complete.target. Aug 13 00:54:07.628295 systemd[1]: Starting initrd-parse-etc.service... Aug 13 00:54:07.652426 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 13 00:54:07.652553 systemd[1]: Finished initrd-parse-etc.service. Aug 13 00:54:07.662857 kernel: audit: type=1130 audit(1755046447.653:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:07.662899 kernel: audit: type=1131 audit(1755046447.653:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:54:07.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:07.653000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:07.653955 systemd[1]: Reached target initrd-fs.target. Aug 13 00:54:07.663821 systemd[1]: Reached target initrd.target. Aug 13 00:54:07.665443 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Aug 13 00:54:07.667076 systemd[1]: Starting dracut-pre-pivot.service... Aug 13 00:54:07.706268 systemd[1]: Finished dracut-pre-pivot.service. Aug 13 00:54:07.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:07.708193 systemd[1]: Starting initrd-cleanup.service... Aug 13 00:54:07.712910 kernel: audit: type=1130 audit(1755046447.706:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:07.720433 systemd[1]: Stopped target nss-lookup.target. Aug 13 00:54:07.721880 systemd[1]: Stopped target remote-cryptsetup.target. Aug 13 00:54:07.723303 systemd[1]: Stopped target timers.target. Aug 13 00:54:07.724539 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 13 00:54:07.725447 systemd[1]: Stopped dracut-pre-pivot.service. Aug 13 00:54:07.726000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:07.727201 systemd[1]: Stopped target initrd.target. Aug 13 00:54:07.739259 kernel: audit: type=1131 audit(1755046447.726:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:07.738843 systemd[1]: Stopped target basic.target. Aug 13 00:54:07.739886 systemd[1]: Stopped target ignition-complete.target. Aug 13 00:54:07.741126 systemd[1]: Stopped target ignition-diskful.target. Aug 13 00:54:07.742459 systemd[1]: Stopped target initrd-root-device.target. Aug 13 00:54:07.743669 systemd[1]: Stopped target remote-fs.target. Aug 13 00:54:07.744981 systemd[1]: Stopped target remote-fs-pre.target. Aug 13 00:54:07.746500 systemd[1]: Stopped target sysinit.target. Aug 13 00:54:07.747590 systemd[1]: Stopped target local-fs.target. Aug 13 00:54:07.748654 systemd[1]: Stopped target local-fs-pre.target. Aug 13 00:54:07.749793 systemd[1]: Stopped target swap.target. Aug 13 00:54:07.751027 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 13 00:54:07.756530 kernel: audit: type=1131 audit(1755046447.752:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:07.752000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:54:07.751299 systemd[1]: Stopped dracut-pre-mount.service. Aug 13 00:54:07.752403 systemd[1]: Stopped target cryptsetup.target. Aug 13 00:54:07.757347 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 13 00:54:07.764280 kernel: audit: type=1131 audit(1755046447.759:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:07.759000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:07.757640 systemd[1]: Stopped dracut-initqueue.service. Aug 13 00:54:07.759565 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 13 00:54:07.765000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:07.759811 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Aug 13 00:54:07.767000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:07.765785 systemd[1]: ignition-files.service: Deactivated successfully. Aug 13 00:54:07.768000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:07.766586 systemd[1]: Stopped ignition-files.service. Aug 13 00:54:07.767493 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Aug 13 00:54:07.781000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:07.782000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:07.767813 systemd[1]: Stopped flatcar-metadata-hostname.service. Aug 13 00:54:07.790000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:07.791288 iscsid[699]: iscsid shutting down. Aug 13 00:54:07.769985 systemd[1]: Stopping ignition-mount.service... Aug 13 00:54:07.795843 ignition[853]: INFO : Ignition 2.14.0 Aug 13 00:54:07.795843 ignition[853]: INFO : Stage: umount Aug 13 00:54:07.795843 ignition[853]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 00:54:07.795843 ignition[853]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Aug 13 00:54:07.795843 ignition[853]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Aug 13 00:54:07.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:54:07.798000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:07.801000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:07.771120 systemd[1]: Stopping iscsid.service... Aug 13 00:54:07.807466 ignition[853]: INFO : umount: umount passed Aug 13 00:54:07.807466 ignition[853]: INFO : Ignition finished successfully Aug 13 00:54:07.780105 systemd[1]: Stopping sysroot-boot.service... Aug 13 00:54:07.780666 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 13 00:54:07.809000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:07.811000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:07.780827 systemd[1]: Stopped systemd-udev-trigger.service. Aug 13 00:54:07.813000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:07.781530 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 13 00:54:07.819000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:07.781633 systemd[1]: Stopped dracut-pre-trigger.service. Aug 13 00:54:07.786956 systemd[1]: iscsid.service: Deactivated successfully. Aug 13 00:54:07.826000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:07.790493 systemd[1]: Stopped iscsid.service. Aug 13 00:54:07.793688 systemd[1]: Stopping iscsiuio.service... Aug 13 00:54:07.794920 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 13 00:54:07.796242 systemd[1]: Finished initrd-cleanup.service. Aug 13 00:54:07.798748 systemd[1]: iscsiuio.service: Deactivated successfully. Aug 13 00:54:07.798935 systemd[1]: Stopped iscsiuio.service. Aug 13 00:54:07.802092 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 13 00:54:07.802292 systemd[1]: Stopped ignition-mount.service. Aug 13 00:54:07.811092 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 13 00:54:07.811167 systemd[1]: Stopped ignition-disks.service. Aug 13 00:54:07.812478 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 13 00:54:07.812559 systemd[1]: Stopped ignition-kargs.service. Aug 13 00:54:07.855000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:07.816475 systemd[1]: ignition-fetch.service: Deactivated successfully. Aug 13 00:54:07.816542 systemd[1]: Stopped ignition-fetch.service. Aug 13 00:54:07.819382 systemd[1]: Stopped target network.target. 
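[All of the Ignition stages torn down here (fetch, kargs, disks, mount) were driven by the user config pulled from the metadata service earlier. For reference, such a config is plain JSON; a minimal spec-3.3 sketch that would produce one files-stage write of the kind logged above (path, mode, and contents are illustrative, not the config this droplet actually received):

    {
      "ignition": { "version": "3.3.0" },
      "storage": {
        "files": [{
          "path": "/home/core/install.sh",
          "mode": 493,
          "contents": { "source": "data:,echo%20hello%0A" }
        }]
      }
    }
]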
Aug 13 00:54:07.820471 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 13 00:54:07.820587 systemd[1]: Stopped ignition-fetch-offline.service. Aug 13 00:54:07.827321 systemd[1]: Stopped target paths.target. Aug 13 00:54:07.883000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:07.828461 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 13 00:54:07.839356 systemd[1]: Stopped systemd-ask-password-console.path. Aug 13 00:54:07.886000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:07.886000 audit: BPF prog-id=6 op=UNLOAD Aug 13 00:54:07.840574 systemd[1]: Stopped target slices.target. Aug 13 00:54:07.842277 systemd[1]: Stopped target sockets.target. Aug 13 00:54:07.843688 systemd[1]: iscsid.socket: Deactivated successfully. Aug 13 00:54:07.843763 systemd[1]: Closed iscsid.socket. Aug 13 00:54:07.851861 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 13 00:54:07.851928 systemd[1]: Closed iscsiuio.socket. Aug 13 00:54:07.853509 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 13 00:54:07.853576 systemd[1]: Stopped ignition-setup.service. Aug 13 00:54:07.856415 systemd[1]: Stopping systemd-networkd.service... Aug 13 00:54:07.902000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:07.857835 systemd[1]: Stopping systemd-resolved.service... Aug 13 00:54:07.911000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:07.863330 systemd-networkd[688]: eth1: DHCPv6 lease lost Aug 13 00:54:07.869701 systemd-networkd[688]: eth0: DHCPv6 lease lost Aug 13 00:54:07.921000 audit: BPF prog-id=9 op=UNLOAD Aug 13 00:54:07.881386 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 13 00:54:07.882319 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 13 00:54:07.882476 systemd[1]: Stopped systemd-resolved.service. Aug 13 00:54:07.942000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:07.884839 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 00:54:07.884993 systemd[1]: Stopped systemd-networkd.service. Aug 13 00:54:07.887235 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 13 00:54:07.978000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:07.887294 systemd[1]: Closed systemd-networkd.socket. Aug 13 00:54:07.892965 systemd[1]: Stopping network-cleanup.service... Aug 13 00:54:07.900098 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 13 00:54:07.900243 systemd[1]: Stopped parse-ip-for-networkd.service. 
Aug 13 00:54:07.983000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:07.903289 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 00:54:07.903387 systemd[1]: Stopped systemd-sysctl.service. Aug 13 00:54:07.986000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:07.929256 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 13 00:54:07.929383 systemd[1]: Stopped systemd-modules-load.service. Aug 13 00:54:07.953665 systemd[1]: Stopping systemd-udevd.service... Aug 13 00:54:07.990000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:07.971606 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Aug 13 00:54:07.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:07.974391 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 13 00:54:07.998000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:07.974631 systemd[1]: Stopped systemd-udevd.service. Aug 13 00:54:07.999000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:07.980987 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 13 00:54:07.981946 systemd[1]: Stopped sysroot-boot.service. Aug 13 00:54:07.984498 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 13 00:54:08.003000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:07.984683 systemd[1]: Stopped network-cleanup.service. Aug 13 00:54:07.986767 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 13 00:54:07.986846 systemd[1]: Closed systemd-udevd-control.socket. Aug 13 00:54:07.987704 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 13 00:54:07.987795 systemd[1]: Closed systemd-udevd-kernel.socket. Aug 13 00:54:07.989574 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 13 00:54:07.989689 systemd[1]: Stopped dracut-pre-udev.service. Aug 13 00:54:07.991323 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 13 00:54:08.021000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:07.991419 systemd[1]: Stopped dracut-cmdline.service. Aug 13 00:54:08.022000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:54:07.996974 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 00:54:07.997084 systemd[1]: Stopped dracut-cmdline-ask.service. Aug 13 00:54:07.998686 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 00:54:07.998777 systemd[1]: Stopped initrd-setup-root.service. Aug 13 00:54:08.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:08.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:08.001209 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Aug 13 00:54:08.002549 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 13 00:54:08.002682 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Aug 13 00:54:08.012391 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 13 00:54:08.012531 systemd[1]: Stopped kmod-static-nodes.service. Aug 13 00:54:08.022208 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 00:54:08.022365 systemd[1]: Stopped systemd-vconsole-setup.service. Aug 13 00:54:08.025466 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Aug 13 00:54:08.026273 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 13 00:54:08.026423 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Aug 13 00:54:08.027479 systemd[1]: Reached target initrd-switch-root.target. Aug 13 00:54:08.030363 systemd[1]: Starting initrd-switch-root.service... Aug 13 00:54:08.050126 systemd[1]: Switching root. Aug 13 00:54:08.070935 systemd-journald[184]: Journal stopped Aug 13 00:54:12.822092 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Aug 13 00:54:12.822237 kernel: SELinux: Class mctp_socket not defined in policy. Aug 13 00:54:12.822262 kernel: SELinux: Class anon_inode not defined in policy. Aug 13 00:54:12.822288 kernel: SELinux: the above unknown classes and permissions will be allowed Aug 13 00:54:12.822310 kernel: SELinux: policy capability network_peer_controls=1 Aug 13 00:54:12.822333 kernel: SELinux: policy capability open_perms=1 Aug 13 00:54:12.822353 kernel: SELinux: policy capability extended_socket_class=1 Aug 13 00:54:12.822369 kernel: SELinux: policy capability always_check_network=0 Aug 13 00:54:12.822393 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 13 00:54:12.822413 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 13 00:54:12.822429 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 13 00:54:12.822445 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 13 00:54:12.822462 systemd[1]: Successfully loaded SELinux policy in 79.973ms. Aug 13 00:54:12.822488 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.498ms. Aug 13 00:54:12.822510 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Aug 13 00:54:12.822527 systemd[1]: Detected virtualization kvm. 
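[The SELinux messages above show the policy loading with unknown classes allowed rather than denied, i.e. the system is not enforcing at this point. Once the machine is up, the mode can be confirmed with the standard tooling, assuming it is shipped on the image:

    getenforce
    # on a stock Flatcar install this is expected to print Permissive
]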
Aug 13 00:54:12.822549 systemd[1]: Detected architecture x86-64. Aug 13 00:54:12.822567 systemd[1]: Detected first boot. Aug 13 00:54:12.822585 systemd[1]: Hostname set to <ci-3510.3.8-f-585a890caa>. Aug 13 00:54:12.822604 systemd[1]: Initializing machine ID from VM UUID. Aug 13 00:54:12.822627 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Aug 13 00:54:12.822649 systemd[1]: Populated /etc with preset unit settings. Aug 13 00:54:12.822667 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 00:54:12.822687 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 00:54:12.822712 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:54:12.822731 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 13 00:54:12.822748 systemd[1]: Stopped initrd-switch-root.service. Aug 13 00:54:12.822766 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 13 00:54:12.822785 systemd[1]: Created slice system-addon\x2dconfig.slice. Aug 13 00:54:12.822804 systemd[1]: Created slice system-addon\x2drun.slice. Aug 13 00:54:12.822820 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Aug 13 00:54:12.822837 systemd[1]: Created slice system-getty.slice. Aug 13 00:54:12.822858 systemd[1]: Created slice system-modprobe.slice. Aug 13 00:54:12.822876 systemd[1]: Created slice system-serial\x2dgetty.slice. Aug 13 00:54:12.822896 systemd[1]: Created slice system-system\x2dcloudinit.slice. Aug 13 00:54:12.822921 systemd[1]: Created slice system-systemd\x2dfsck.slice. Aug 13 00:54:12.822941 systemd[1]: Created slice user.slice. Aug 13 00:54:12.822960 systemd[1]: Started systemd-ask-password-console.path. Aug 13 00:54:12.822977 systemd[1]: Started systemd-ask-password-wall.path. Aug 13 00:54:12.822996 systemd[1]: Set up automount boot.automount. Aug 13 00:54:12.823019 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Aug 13 00:54:12.823037 systemd[1]: Stopped target initrd-switch-root.target. Aug 13 00:54:12.823055 systemd[1]: Stopped target initrd-fs.target. Aug 13 00:54:12.823074 systemd[1]: Stopped target initrd-root-fs.target. Aug 13 00:54:12.823091 systemd[1]: Reached target integritysetup.target. Aug 13 00:54:12.823110 systemd[1]: Reached target remote-cryptsetup.target. Aug 13 00:54:12.823128 systemd[1]: Reached target remote-fs.target. Aug 13 00:54:12.823149 systemd[1]: Reached target slices.target. Aug 13 00:54:12.826283 systemd[1]: Reached target swap.target. Aug 13 00:54:12.826371 systemd[1]: Reached target torcx.target. Aug 13 00:54:12.826394 systemd[1]: Reached target veritysetup.target. Aug 13 00:54:12.826416 systemd[1]: Listening on systemd-coredump.socket. Aug 13 00:54:12.826437 systemd[1]: Listening on systemd-initctl.socket. Aug 13 00:54:12.826471 systemd[1]: Listening on systemd-networkd.socket. Aug 13 00:54:12.826491 systemd[1]: Listening on systemd-udevd-control.socket. Aug 13 00:54:12.826512 systemd[1]: Listening on systemd-udevd-kernel.socket. Aug 13 00:54:12.826533 systemd[1]: Listening on systemd-userdbd.socket. Aug 13 00:54:12.826562 systemd[1]: Mounting dev-hugepages.mount... 
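[systemd flags two legacy cgroup-v1 directives in locksmithd.service above. The modern equivalents would normally go into a drop-in rather than an edit of the vendor unit; a sketch with placeholder values (the unit's real numbers are not shown in this log):

    # /etc/systemd/system/locksmithd.service.d/10-cgroupv2.conf
    [Service]
    CPUWeight=100
    MemoryMax=512M

Note that the warning itself only disappears once the vendor unit stops setting CPUShares=/MemoryLimit=.]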
Aug 13 00:54:12.826582 systemd[1]: Mounting dev-mqueue.mount... Aug 13 00:54:12.826603 systemd[1]: Mounting media.mount... Aug 13 00:54:12.826626 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:54:12.826644 systemd[1]: Mounting sys-kernel-debug.mount... Aug 13 00:54:12.826663 systemd[1]: Mounting sys-kernel-tracing.mount... Aug 13 00:54:12.826682 systemd[1]: Mounting tmp.mount... Aug 13 00:54:12.826700 systemd[1]: Starting flatcar-tmpfiles.service... Aug 13 00:54:12.826719 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:54:12.826741 systemd[1]: Starting kmod-static-nodes.service... Aug 13 00:54:12.826758 systemd[1]: Starting modprobe@configfs.service... Aug 13 00:54:12.826777 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:54:12.826796 systemd[1]: Starting modprobe@drm.service... Aug 13 00:54:12.826816 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:54:12.826834 systemd[1]: Starting modprobe@fuse.service... Aug 13 00:54:12.826853 systemd[1]: Starting modprobe@loop.service... Aug 13 00:54:12.826871 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 00:54:12.826888 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 13 00:54:12.826910 systemd[1]: Stopped systemd-fsck-root.service. Aug 13 00:54:12.826927 kernel: kauditd_printk_skb: 66 callbacks suppressed Aug 13 00:54:12.826950 kernel: audit: type=1131 audit(1755046452.617:108): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:12.826967 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 13 00:54:12.826985 systemd[1]: Stopped systemd-fsck-usr.service. Aug 13 00:54:12.827004 kernel: audit: type=1131 audit(1755046452.631:109): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:12.827022 systemd[1]: Stopped systemd-journald.service. Aug 13 00:54:12.827039 systemd[1]: Starting systemd-journald.service... Aug 13 00:54:12.827063 kernel: audit: type=1130 audit(1755046452.646:110): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:12.827081 systemd[1]: Starting systemd-modules-load.service... Aug 13 00:54:12.827098 kernel: audit: type=1131 audit(1755046452.646:111): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:12.827114 systemd[1]: Starting systemd-network-generator.service... Aug 13 00:54:12.827132 kernel: audit: type=1334 audit(1755046452.648:112): prog-id=18 op=LOAD Aug 13 00:54:12.827152 systemd[1]: Starting systemd-remount-fs.service... 
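[The modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop units being started here are all instances of one systemd template; each instance just loads the kernel module named by its instance string. The stock template's payload is essentially (abridged; %i expands to, e.g., fuse):

    # modprobe@.service (abridged)
    [Service]
    Type=oneshot
    ExecStart=-/sbin/modprobe -abq %i
]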
Aug 13 00:54:12.827441 kernel: audit: type=1334 audit(1755046452.648:113): prog-id=19 op=LOAD Aug 13 00:54:12.827474 kernel: audit: type=1334 audit(1755046452.648:114): prog-id=20 op=LOAD Aug 13 00:54:12.827498 kernel: fuse: init (API version 7.34) Aug 13 00:54:12.827516 kernel: audit: type=1334 audit(1755046452.648:115): prog-id=16 op=UNLOAD Aug 13 00:54:12.827533 kernel: audit: type=1334 audit(1755046452.648:116): prog-id=17 op=UNLOAD Aug 13 00:54:12.827552 systemd[1]: Starting systemd-udev-trigger.service... Aug 13 00:54:12.827573 systemd[1]: verity-setup.service: Deactivated successfully. Aug 13 00:54:12.827594 systemd[1]: Stopped verity-setup.service. Aug 13 00:54:12.827613 kernel: audit: type=1131 audit(1755046452.702:117): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:12.827806 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:54:12.827834 kernel: loop: module loaded Aug 13 00:54:12.827858 systemd[1]: Mounted dev-hugepages.mount. Aug 13 00:54:12.827878 systemd[1]: Mounted dev-mqueue.mount. Aug 13 00:54:12.827898 systemd[1]: Mounted media.mount. Aug 13 00:54:12.827915 systemd[1]: Mounted sys-kernel-debug.mount. Aug 13 00:54:12.827934 systemd[1]: Mounted sys-kernel-tracing.mount. Aug 13 00:54:12.827952 systemd[1]: Mounted tmp.mount. Aug 13 00:54:12.832161 systemd[1]: Finished kmod-static-nodes.service. Aug 13 00:54:12.832222 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 13 00:54:12.832244 systemd[1]: Finished modprobe@configfs.service. Aug 13 00:54:12.832276 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:54:12.832295 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:54:12.832314 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:54:12.832370 systemd[1]: Finished modprobe@drm.service. Aug 13 00:54:12.832391 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:54:12.832416 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 00:54:12.832434 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 13 00:54:12.832452 systemd[1]: Finished modprobe@fuse.service. Aug 13 00:54:12.832477 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:54:12.832503 systemd[1]: Finished modprobe@loop.service. Aug 13 00:54:12.832522 systemd[1]: Finished systemd-modules-load.service. Aug 13 00:54:12.832543 systemd[1]: Finished flatcar-tmpfiles.service. Aug 13 00:54:12.832562 systemd[1]: Finished systemd-network-generator.service. Aug 13 00:54:12.832582 systemd[1]: Finished systemd-remount-fs.service. Aug 13 00:54:12.832602 systemd[1]: Reached target network-pre.target. Aug 13 00:54:12.832621 systemd[1]: Mounting sys-fs-fuse-connections.mount... Aug 13 00:54:12.832638 systemd[1]: Mounting sys-kernel-config.mount... Aug 13 00:54:12.832658 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 00:54:12.832678 systemd[1]: Starting systemd-hwdb-update.service... Aug 13 00:54:12.832719 systemd-journald[968]: Journal started Aug 13 00:54:12.832825 systemd-journald[968]: Runtime Journal (/run/log/journal/882e35e7625e4855a2ad0c2361991e74) is 4.9M, max 39.5M, 34.5M free. 
Aug 13 00:54:08.301000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 13 00:54:08.376000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Aug 13 00:54:08.376000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Aug 13 00:54:08.376000 audit: BPF prog-id=10 op=LOAD Aug 13 00:54:08.376000 audit: BPF prog-id=10 op=UNLOAD Aug 13 00:54:08.376000 audit: BPF prog-id=11 op=LOAD Aug 13 00:54:08.376000 audit: BPF prog-id=11 op=UNLOAD Aug 13 00:54:08.516000 audit[886]: AVC avc: denied { associate } for pid=886 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Aug 13 00:54:08.516000 audit[886]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d8a2 a1=c0000cedf8 a2=c0000d70c0 a3=32 items=0 ppid=869 pid=886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:08.516000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Aug 13 00:54:08.518000 audit[886]: AVC avc: denied { associate } for pid=886 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Aug 13 00:54:08.518000 audit[886]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d989 a2=1ed a3=0 items=2 ppid=869 pid=886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:08.518000 audit: CWD cwd="/" Aug 13 00:54:08.518000 audit: PATH item=0 name=(null) inode=2 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:08.518000 audit: PATH item=1 name=(null) inode=3 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:08.518000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Aug 13 00:54:12.438000 audit: BPF prog-id=12 op=LOAD Aug 13 00:54:12.438000 audit: BPF prog-id=3 op=UNLOAD Aug 13 00:54:12.438000 audit: BPF prog-id=13 op=LOAD Aug 13 00:54:12.438000 audit: BPF prog-id=14 op=LOAD Aug 13 00:54:12.438000 audit: BPF prog-id=4 op=UNLOAD Aug 13 00:54:12.438000 audit: BPF prog-id=5 op=UNLOAD Aug 13 00:54:12.440000 audit: BPF prog-id=15 op=LOAD Aug 13 00:54:12.440000 audit: BPF prog-id=12 op=UNLOAD Aug 13 
00:54:12.440000 audit: BPF prog-id=16 op=LOAD Aug 13 00:54:12.440000 audit: BPF prog-id=17 op=LOAD Aug 13 00:54:12.440000 audit: BPF prog-id=13 op=UNLOAD Aug 13 00:54:12.440000 audit: BPF prog-id=14 op=UNLOAD Aug 13 00:54:12.840129 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:54:12.442000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:12.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:12.447000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:12.453000 audit: BPF prog-id=15 op=UNLOAD Aug 13 00:54:12.617000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:12.631000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:12.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:12.646000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:12.648000 audit: BPF prog-id=18 op=LOAD Aug 13 00:54:12.648000 audit: BPF prog-id=19 op=LOAD Aug 13 00:54:12.648000 audit: BPF prog-id=20 op=LOAD Aug 13 00:54:12.648000 audit: BPF prog-id=16 op=UNLOAD Aug 13 00:54:12.648000 audit: BPF prog-id=17 op=UNLOAD Aug 13 00:54:12.702000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:12.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:12.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:12.744000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:12.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:54:12.752000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:12.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:12.758000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:12.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:12.768000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:12.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:12.775000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:12.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:12.783000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:12.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:12.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:12.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:12.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:54:12.816000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Aug 13 00:54:12.816000 audit[968]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffef39b5e40 a2=4000 a3=7ffef39b5edc items=0 ppid=1 pid=968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:12.816000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Aug 13 00:54:08.509794 /usr/lib/systemd/system-generators/torcx-generator[886]: time="2025-08-13T00:54:08Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 00:54:12.851437 systemd[1]: Starting systemd-random-seed.service... Aug 13 00:54:12.851502 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 00:54:12.435653 systemd[1]: Queued start job for default target multi-user.target. Aug 13 00:54:12.855356 systemd[1]: Starting systemd-sysctl.service... Aug 13 00:54:12.891370 systemd[1]: Starting systemd-sysusers.service... Aug 13 00:54:12.891402 systemd[1]: Started systemd-journald.service. Aug 13 00:54:12.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:12.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:08.511449 /usr/lib/systemd/system-generators/torcx-generator[886]: time="2025-08-13T00:54:08Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Aug 13 00:54:12.435678 systemd[1]: Unnecessary job was removed for dev-vda6.device. Aug 13 00:54:08.511542 /usr/lib/systemd/system-generators/torcx-generator[886]: time="2025-08-13T00:54:08Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Aug 13 00:54:12.441865 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 13 00:54:08.511611 /usr/lib/systemd/system-generators/torcx-generator[886]: time="2025-08-13T00:54:08Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Aug 13 00:54:12.873772 systemd[1]: Mounted sys-fs-fuse-connections.mount. Aug 13 00:54:08.511628 /usr/lib/systemd/system-generators/torcx-generator[886]: time="2025-08-13T00:54:08Z" level=debug msg="skipped missing lower profile" missing profile=oem Aug 13 00:54:12.874752 systemd[1]: Mounted sys-kernel-config.mount. Aug 13 00:54:08.511705 /usr/lib/systemd/system-generators/torcx-generator[886]: time="2025-08-13T00:54:08Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Aug 13 00:54:12.876681 systemd[1]: Finished systemd-udev-trigger.service. 
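The SYSCALL record above from systemd-journald can be read field by field: arch=c000003e is the AUDIT_ARCH_X86_64 constant, and on that ABI syscall number 46 is sendmsg, with exit=60 being the byte count the call returned. A minimal decoding sketch in Python (the syscall table is a deliberately tiny, illustrative subset of the x86_64 table):

    # Pull a few fields out of an audit SYSCALL record body.
    # Quoted values containing spaces are not handled; the fields
    # inspected here never contain them.
    X86_64_SYSCALLS = {44: "sendto", 45: "recvfrom", 46: "sendmsg", 47: "recvmsg"}

    def parse_audit_record(body: str) -> dict:
        fields = {}
        for token in body.split():
            if "=" in token:
                key, _, value = token.partition("=")
                fields[key] = value
        return fields

    record = 'arch=c000003e syscall=46 success=yes exit=60 comm="systemd-journal"'
    fields = parse_audit_record(record)
    assert fields["arch"] == "c000003e"             # AUDIT_ARCH_X86_64
    print(X86_64_SYSCALLS[int(fields["syscall"])])  # -> sendmsg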
Aug 13 00:54:08.511733 /usr/lib/systemd/system-generators/torcx-generator[886]: time="2025-08-13T00:54:08Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Aug 13 00:54:12.881230 systemd[1]: Starting systemd-journal-flush.service... Aug 13 00:54:08.512128 /usr/lib/systemd/system-generators/torcx-generator[886]: time="2025-08-13T00:54:08Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Aug 13 00:54:12.886614 systemd[1]: Starting systemd-udev-settle.service... Aug 13 00:54:08.512269 /usr/lib/systemd/system-generators/torcx-generator[886]: time="2025-08-13T00:54:08Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Aug 13 00:54:08.512299 /usr/lib/systemd/system-generators/torcx-generator[886]: time="2025-08-13T00:54:08Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Aug 13 00:54:08.514533 /usr/lib/systemd/system-generators/torcx-generator[886]: time="2025-08-13T00:54:08Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Aug 13 00:54:08.514601 /usr/lib/systemd/system-generators/torcx-generator[886]: time="2025-08-13T00:54:08Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Aug 13 00:54:08.514680 /usr/lib/systemd/system-generators/torcx-generator[886]: time="2025-08-13T00:54:08Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8 Aug 13 00:54:08.514709 /usr/lib/systemd/system-generators/torcx-generator[886]: time="2025-08-13T00:54:08Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Aug 13 00:54:08.514744 /usr/lib/systemd/system-generators/torcx-generator[886]: time="2025-08-13T00:54:08Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8 Aug 13 00:54:08.514767 /usr/lib/systemd/system-generators/torcx-generator[886]: time="2025-08-13T00:54:08Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Aug 13 00:54:11.837499 /usr/lib/systemd/system-generators/torcx-generator[886]: time="2025-08-13T00:54:11Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Aug 13 00:54:11.838050 /usr/lib/systemd/system-generators/torcx-generator[886]: time="2025-08-13T00:54:11Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Aug 13 00:54:11.838349 /usr/lib/systemd/system-generators/torcx-generator[886]: time="2025-08-13T00:54:11Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Aug 13 00:54:11.839243 /usr/lib/systemd/system-generators/torcx-generator[886]: time="2025-08-13T00:54:11Z" level=debug msg="systemd units propagated" 
assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Aug 13 00:54:11.839324 /usr/lib/systemd/system-generators/torcx-generator[886]: time="2025-08-13T00:54:11Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Aug 13 00:54:11.839435 /usr/lib/systemd/system-generators/torcx-generator[886]: time="2025-08-13T00:54:11Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Aug 13 00:54:12.904824 systemd[1]: Finished systemd-random-seed.service. Aug 13 00:54:12.905785 systemd[1]: Reached target first-boot-complete.target. Aug 13 00:54:12.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:12.910950 systemd-journald[968]: Time spent on flushing to /var/log/journal/882e35e7625e4855a2ad0c2361991e74 is 45.387ms for 1170 entries. Aug 13 00:54:12.910950 systemd-journald[968]: System Journal (/var/log/journal/882e35e7625e4855a2ad0c2361991e74) is 8.0M, max 195.6M, 187.6M free. Aug 13 00:54:12.984198 systemd-journald[968]: Received client request to flush runtime journal. Aug 13 00:54:12.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:12.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:12.985732 udevadm[995]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Aug 13 00:54:12.935275 systemd[1]: Finished systemd-sysctl.service. Aug 13 00:54:12.955932 systemd[1]: Finished systemd-sysusers.service. Aug 13 00:54:12.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:12.959553 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Aug 13 00:54:12.985870 systemd[1]: Finished systemd-journal-flush.service. Aug 13 00:54:13.012426 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Aug 13 00:54:13.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:13.660749 systemd[1]: Finished systemd-hwdb-update.service. Aug 13 00:54:13.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:54:13.662000 audit: BPF prog-id=21 op=LOAD Aug 13 00:54:13.662000 audit: BPF prog-id=22 op=LOAD Aug 13 00:54:13.662000 audit: BPF prog-id=7 op=UNLOAD Aug 13 00:54:13.662000 audit: BPF prog-id=8 op=UNLOAD Aug 13 00:54:13.664425 systemd[1]: Starting systemd-udevd.service... Aug 13 00:54:13.691684 systemd-udevd[1000]: Using default interface naming scheme 'v252'. Aug 13 00:54:13.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:13.722000 audit: BPF prog-id=23 op=LOAD Aug 13 00:54:13.720443 systemd[1]: Started systemd-udevd.service. Aug 13 00:54:13.726244 systemd[1]: Starting systemd-networkd.service... Aug 13 00:54:13.735000 audit: BPF prog-id=24 op=LOAD Aug 13 00:54:13.735000 audit: BPF prog-id=25 op=LOAD Aug 13 00:54:13.735000 audit: BPF prog-id=26 op=LOAD Aug 13 00:54:13.736985 systemd[1]: Starting systemd-userdbd.service... Aug 13 00:54:13.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:13.796280 systemd[1]: Started systemd-userdbd.service. Aug 13 00:54:13.811691 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Aug 13 00:54:13.879751 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:54:13.879946 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:54:13.881590 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:54:13.883981 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:54:13.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:13.893000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:13.888515 systemd[1]: Starting modprobe@loop.service... Aug 13 00:54:13.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:13.895000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:13.889337 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 00:54:13.889450 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 00:54:13.889602 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:54:13.890423 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:54:13.890716 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:54:13.894603 systemd[1]: modprobe@loop.service: Deactivated successfully. 
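Each time systemd (re)starts a unit that carries device, socket, or cgroup filters, it loads fresh BPF programs and unloads the superseded ones, which is why the audit stream alternates BPF prog-id=N op=LOAD/UNLOAD around the journald and udevd restarts above. A small tally sketch over such lines (the sample records are copied from this boot):

    import re
    from collections import Counter

    sample = """\
    audit: BPF prog-id=21 op=LOAD
    audit: BPF prog-id=22 op=LOAD
    audit: BPF prog-id=7 op=UNLOAD
    audit: BPF prog-id=8 op=UNLOAD
    """

    ops = Counter(m.group(1) for m in re.finditer(r"op=(LOAD|UNLOAD)", sample))
    print(ops)  # Counter({'LOAD': 2, 'UNLOAD': 2})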
Aug 13 00:54:13.894790 systemd[1]: Finished modprobe@loop.service. Aug 13 00:54:13.895516 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 00:54:13.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:13.899000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:13.898575 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:54:13.898791 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 00:54:13.899970 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:54:13.942389 systemd-networkd[1008]: lo: Link UP Aug 13 00:54:13.942402 systemd-networkd[1008]: lo: Gained carrier Aug 13 00:54:13.943043 systemd-networkd[1008]: Enumeration completed Aug 13 00:54:13.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:13.943152 systemd-networkd[1008]: eth1: Configuring with /run/systemd/network/10-72:2c:7a:4e:45:b0.network. Aug 13 00:54:13.943242 systemd[1]: Started systemd-networkd.service. Aug 13 00:54:13.945273 systemd-networkd[1008]: eth0: Configuring with /run/systemd/network/10-5a:7f:b6:72:e2:01.network. Aug 13 00:54:13.947150 systemd-networkd[1008]: eth1: Link UP Aug 13 00:54:13.947165 systemd-networkd[1008]: eth1: Gained carrier Aug 13 00:54:13.952611 systemd-networkd[1008]: eth0: Link UP Aug 13 00:54:13.952624 systemd-networkd[1008]: eth0: Gained carrier Aug 13 00:54:13.981208 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Aug 13 00:54:14.002223 kernel: ACPI: button: Power Button [PWRF] Aug 13 00:54:14.010105 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
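The device unit name just above, dev-disk-by\x2dlabel-OEM.device, is systemd's path escaping at work: "/" becomes "-" and a literal "-" becomes "\x2d". A minimal decoder mirroring systemd-escape --unescape --path (it only handles the \x2d escape seen in this log, not the full rule set):

    def unescape_device_unit(name: str) -> str:
        # Strip the unit suffix, undo '/' -> '-', then undo '-' -> '\x2d'.
        stem = name.removesuffix(".device")
        return "/" + stem.replace("-", "/").replace("\\x2d", "-")

    print(unescape_device_unit(r"dev-disk-by\x2dlabel-OEM.device"))
    # -> /dev/disk/by-label/OEM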
Aug 13 00:54:14.039000 audit[1002]: AVC avc: denied { confidentiality } for pid=1002 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Aug 13 00:54:14.039000 audit[1002]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=559238f66660 a1=338ac a2=7fb0eb7f8bc5 a3=5 items=110 ppid=1000 pid=1002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:14.039000 audit: CWD cwd="/" Aug 13 00:54:14.039000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=1 name=(null) inode=13241 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=2 name=(null) inode=13241 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=3 name=(null) inode=13242 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=4 name=(null) inode=13241 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=5 name=(null) inode=13243 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=6 name=(null) inode=13241 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=7 name=(null) inode=13244 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=8 name=(null) inode=13244 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=9 name=(null) inode=13245 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=10 name=(null) inode=13244 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=11 name=(null) inode=13246 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=12 name=(null) inode=13244 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=13 name=(null) inode=13247 dev=00:0b mode=0100640 ouid=0 
ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=14 name=(null) inode=13244 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=15 name=(null) inode=13248 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=16 name=(null) inode=13244 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=17 name=(null) inode=13249 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=18 name=(null) inode=13241 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=19 name=(null) inode=13250 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=20 name=(null) inode=13250 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=21 name=(null) inode=13251 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=22 name=(null) inode=13250 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=23 name=(null) inode=13252 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=24 name=(null) inode=13250 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=25 name=(null) inode=13253 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=26 name=(null) inode=13250 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=27 name=(null) inode=13254 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=28 name=(null) inode=13250 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=29 name=(null) inode=13255 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=30 name=(null) inode=13241 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=31 name=(null) inode=13256 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=32 name=(null) inode=13256 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=33 name=(null) inode=13257 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=34 name=(null) inode=13256 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=35 name=(null) inode=13258 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=36 name=(null) inode=13256 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=37 name=(null) inode=13259 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=38 name=(null) inode=13256 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=39 name=(null) inode=13260 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=40 name=(null) inode=13256 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=41 name=(null) inode=13261 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=42 name=(null) inode=13241 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=43 name=(null) inode=13262 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=44 name=(null) inode=13262 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=45 name=(null) inode=13263 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 
audit: PATH item=46 name=(null) inode=13262 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=47 name=(null) inode=13264 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=48 name=(null) inode=13262 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=49 name=(null) inode=13265 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=50 name=(null) inode=13262 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=51 name=(null) inode=13266 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=52 name=(null) inode=13262 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=53 name=(null) inode=13267 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=55 name=(null) inode=13268 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=56 name=(null) inode=13268 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=57 name=(null) inode=13269 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=58 name=(null) inode=13268 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=59 name=(null) inode=13270 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=60 name=(null) inode=13268 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=61 name=(null) inode=13271 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=62 name=(null) inode=13271 dev=00:0b mode=040750 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=63 name=(null) inode=13272 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=64 name=(null) inode=13271 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=65 name=(null) inode=13273 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=66 name=(null) inode=13271 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=67 name=(null) inode=13274 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=68 name=(null) inode=13271 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=69 name=(null) inode=13275 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=70 name=(null) inode=13271 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=71 name=(null) inode=13276 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=72 name=(null) inode=13268 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=73 name=(null) inode=13277 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=74 name=(null) inode=13277 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=75 name=(null) inode=13278 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=76 name=(null) inode=13277 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=77 name=(null) inode=13279 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=78 name=(null) inode=13277 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 
cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=79 name=(null) inode=13280 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=80 name=(null) inode=13277 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=81 name=(null) inode=13281 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=82 name=(null) inode=13277 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=83 name=(null) inode=13282 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=84 name=(null) inode=13268 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=85 name=(null) inode=13283 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=86 name=(null) inode=13283 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=87 name=(null) inode=13284 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=88 name=(null) inode=13283 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=89 name=(null) inode=13285 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=90 name=(null) inode=13283 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=91 name=(null) inode=13286 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=92 name=(null) inode=13283 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=93 name=(null) inode=13287 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=94 name=(null) inode=13283 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: 
PATH item=95 name=(null) inode=13288 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=96 name=(null) inode=13268 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=97 name=(null) inode=13289 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=98 name=(null) inode=13289 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=99 name=(null) inode=13290 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=100 name=(null) inode=13289 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=101 name=(null) inode=13291 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=102 name=(null) inode=13289 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=103 name=(null) inode=13292 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=104 name=(null) inode=13289 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=105 name=(null) inode=13293 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=106 name=(null) inode=13289 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=107 name=(null) inode=13294 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PATH item=109 name=(null) inode=13295 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 00:54:14.039000 audit: PROCTITLE proctitle="(udev-worker)" Aug 13 00:54:14.074310 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Aug 13 00:54:14.097210 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Aug 13 00:54:14.108207 kernel: 
mousedev: PS/2 mouse device common for all mice Aug 13 00:54:14.237207 kernel: EDAC MC: Ver: 3.0.0 Aug 13 00:54:14.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:14.261099 systemd[1]: Finished systemd-udev-settle.service. Aug 13 00:54:14.263697 systemd[1]: Starting lvm2-activation-early.service... Aug 13 00:54:14.285365 lvm[1038]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 00:54:14.313040 systemd[1]: Finished lvm2-activation-early.service. Aug 13 00:54:14.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:14.313893 systemd[1]: Reached target cryptsetup.target. Aug 13 00:54:14.316223 systemd[1]: Starting lvm2-activation.service... Aug 13 00:54:14.322283 lvm[1039]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 00:54:14.345660 systemd[1]: Finished lvm2-activation.service. Aug 13 00:54:14.346831 systemd[1]: Reached target local-fs-pre.target. Aug 13 00:54:14.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:14.349768 systemd[1]: Mounting media-configdrive.mount... Aug 13 00:54:14.350488 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 13 00:54:14.350552 systemd[1]: Reached target machines.target. Aug 13 00:54:14.353243 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Aug 13 00:54:14.369265 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Aug 13 00:54:14.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:14.376191 kernel: ISO 9660 Extensions: RRIP_1991A Aug 13 00:54:14.377827 systemd[1]: Mounted media-configdrive.mount. Aug 13 00:54:14.378820 systemd[1]: Reached target local-fs.target. Aug 13 00:54:14.381001 systemd[1]: Starting ldconfig.service... Aug 13 00:54:14.382253 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:54:14.382364 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:54:14.385127 systemd[1]: Starting systemd-boot-update.service... Aug 13 00:54:14.394638 systemd[1]: Starting systemd-machine-id-commit.service... Aug 13 00:54:14.398980 systemd[1]: Starting systemd-sysext.service... Aug 13 00:54:14.412118 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1045 (bootctl) Aug 13 00:54:14.414527 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Aug 13 00:54:14.425981 systemd[1]: Unmounting usr-share-oem.mount... Aug 13 00:54:14.432689 systemd[1]: usr-share-oem.mount: Deactivated successfully. Aug 13 00:54:14.433009 systemd[1]: Unmounted usr-share-oem.mount. 
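A recurring pattern in this boot is units skipped on unmet condition checks (ConditionVirtualization=xen, ConditionPathExists=..., ConditionDirectoryNotEmpty=...) or because no trigger condition was met. A grep-style sketch for collecting those from a saved copy of this console log (the file name is hypothetical):

    import re

    pat = re.compile(
        r"(\S+\.(?:service|mount|device)) was skipped because "
        r"(?:of an unmet condition check \((.+?)\)|no trigger condition checks were met)"
    )

    with open("boot-console.log", encoding="utf-8") as fh:  # hypothetical path
        for line in fh:
            for unit, cond in pat.findall(line):
                print(unit, "->", cond or "no trigger conditions met")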
Aug 13 00:54:14.459544 kernel: loop0: detected capacity change from 0 to 224512 Aug 13 00:54:14.554597 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 13 00:54:14.557449 systemd[1]: Finished systemd-machine-id-commit.service. Aug 13 00:54:14.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:14.572505 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 00:54:14.595212 kernel: loop1: detected capacity change from 0 to 224512 Aug 13 00:54:14.601809 systemd-fsck[1054]: fsck.fat 4.2 (2021-01-31) Aug 13 00:54:14.601809 systemd-fsck[1054]: /dev/vda1: 789 files, 119324/258078 clusters Aug 13 00:54:14.609883 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Aug 13 00:54:14.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:14.612866 systemd[1]: Mounting boot.mount... Aug 13 00:54:14.626320 (sd-sysext)[1057]: Using extensions 'kubernetes'. Aug 13 00:54:14.627608 (sd-sysext)[1057]: Merged extensions into '/usr'. Aug 13 00:54:14.657875 systemd[1]: Mounted boot.mount. Aug 13 00:54:14.667467 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:54:14.669673 systemd[1]: Mounting usr-share-oem.mount... Aug 13 00:54:14.670638 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:54:14.672667 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:54:14.677495 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:54:14.683600 systemd[1]: Starting modprobe@loop.service... Aug 13 00:54:14.684375 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:54:14.684601 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:54:14.684821 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:54:14.691084 systemd[1]: Mounted usr-share-oem.mount. Aug 13 00:54:14.692831 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:54:14.693047 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:54:14.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:14.694000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:14.694994 systemd[1]: Finished systemd-sysext.service. Aug 13 00:54:14.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:14.701008 systemd[1]: Starting ensure-sysext.service... 
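The (sd-sysext) lines above show systemd-sysext overlay-mounting the 'kubernetes' extension's /usr tree onto the host's /usr. An extension is only merged if it carries an extension-release file whose ID matches the host's os-release (or is "_any"). A rough approximation of that compatibility check, with illustrative paths; the real logic also compares SYSEXT_LEVEL/VERSION_ID:

    def parse_release(path: str) -> dict:
        data = {}
        with open(path, encoding="utf-8") as fh:
            for line in fh:
                line = line.strip()
                if line and not line.startswith("#") and "=" in line:
                    key, _, value = line.partition("=")
                    data[key] = value.strip('"')
        return data

    host = parse_release("/etc/os-release")
    ext = parse_release(  # illustrative extension location
        "/run/extensions/kubernetes/usr/lib/extension-release.d/extension-release.kubernetes")
    print("compatible:", ext.get("ID") in (host.get("ID"), "_any"))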
Aug 13 00:54:14.705490 systemd[1]: Starting systemd-tmpfiles-setup.service... Aug 13 00:54:14.709375 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:54:14.709651 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 00:54:14.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:14.710000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:14.711256 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:54:14.711461 systemd[1]: Finished modprobe@loop.service. Aug 13 00:54:14.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:14.713000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:14.713635 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:54:14.713713 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 00:54:14.720789 systemd[1]: Reloading. Aug 13 00:54:14.739674 systemd-tmpfiles[1065]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Aug 13 00:54:14.745967 systemd-tmpfiles[1065]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 00:54:14.751284 systemd-tmpfiles[1065]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 13 00:54:14.910804 ldconfig[1044]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 00:54:14.939420 /usr/lib/systemd/system-generators/torcx-generator[1084]: time="2025-08-13T00:54:14Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 00:54:14.939469 /usr/lib/systemd/system-generators/torcx-generator[1084]: time="2025-08-13T00:54:14Z" level=info msg="torcx already run" Aug 13 00:54:15.079407 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 00:54:15.079444 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 00:54:15.102815 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
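The deprecation warnings above name one-for-one replacements: CPUShares= becomes CPUWeight=, MemoryLimit= becomes MemoryMax=, and /var/run/ paths move to /run/. A throwaway migration sketch (the input path is hypothetical; note that CPUShares and CPUWeight use different scales, default 1024 vs. 100, so a bare rename may also need the value rescaled):

    REPLACEMENTS = {
        "CPUShares=": "CPUWeight=",
        "MemoryLimit=": "MemoryMax=",
        "/var/run/": "/run/",
    }

    def migrate(text: str) -> str:
        for old, new in REPLACEMENTS.items():
            text = text.replace(old, new)
        return text

    with open("locksmithd.service", encoding="utf-8") as fh:  # hypothetical copy
        print(migrate(fh.read()))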
Aug 13 00:54:15.195000 audit: BPF prog-id=27 op=LOAD Aug 13 00:54:15.195000 audit: BPF prog-id=24 op=UNLOAD Aug 13 00:54:15.195000 audit: BPF prog-id=28 op=LOAD Aug 13 00:54:15.195000 audit: BPF prog-id=29 op=LOAD Aug 13 00:54:15.195000 audit: BPF prog-id=25 op=UNLOAD Aug 13 00:54:15.195000 audit: BPF prog-id=26 op=UNLOAD Aug 13 00:54:15.197000 audit: BPF prog-id=30 op=LOAD Aug 13 00:54:15.197000 audit: BPF prog-id=18 op=UNLOAD Aug 13 00:54:15.197000 audit: BPF prog-id=31 op=LOAD Aug 13 00:54:15.198000 audit: BPF prog-id=32 op=LOAD Aug 13 00:54:15.198000 audit: BPF prog-id=19 op=UNLOAD Aug 13 00:54:15.198000 audit: BPF prog-id=20 op=UNLOAD Aug 13 00:54:15.201000 audit: BPF prog-id=33 op=LOAD Aug 13 00:54:15.201000 audit: BPF prog-id=23 op=UNLOAD Aug 13 00:54:15.204000 audit: BPF prog-id=34 op=LOAD Aug 13 00:54:15.204000 audit: BPF prog-id=35 op=LOAD Aug 13 00:54:15.204000 audit: BPF prog-id=21 op=UNLOAD Aug 13 00:54:15.204000 audit: BPF prog-id=22 op=UNLOAD Aug 13 00:54:15.213942 systemd[1]: Finished ldconfig.service. Aug 13 00:54:15.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:15.215972 systemd[1]: Finished systemd-boot-update.service. Aug 13 00:54:15.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:15.219931 systemd[1]: Finished systemd-tmpfiles-setup.service. Aug 13 00:54:15.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:15.227097 systemd[1]: Starting audit-rules.service... Aug 13 00:54:15.237959 systemd[1]: Starting clean-ca-certificates.service... Aug 13 00:54:15.242565 systemd[1]: Starting systemd-journal-catalog-update.service... Aug 13 00:54:15.247000 audit: BPF prog-id=36 op=LOAD Aug 13 00:54:15.248974 systemd[1]: Starting systemd-resolved.service... Aug 13 00:54:15.250000 audit: BPF prog-id=37 op=LOAD Aug 13 00:54:15.254566 systemd[1]: Starting systemd-timesyncd.service... Aug 13 00:54:15.257474 systemd[1]: Starting systemd-update-utmp.service... Aug 13 00:54:15.271063 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:54:15.273617 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:54:15.279530 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:54:15.283730 systemd[1]: Starting modprobe@loop.service... Aug 13 00:54:15.284926 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:54:15.285220 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:54:15.287018 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:54:15.288847 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:54:15.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 00:54:15.290000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:15.295451 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:54:15.295693 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 00:54:15.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:15.296000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:15.297130 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:54:15.301343 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 00:54:15.302243 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:54:15.302569 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:54:15.302868 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:54:15.310902 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Aug 13 00:54:15.315134 systemd[1]: Starting modprobe@drm.service... Aug 13 00:54:15.319165 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 00:54:15.320537 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 00:54:15.320902 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:54:15.325597 systemd[1]: Starting systemd-networkd-wait-online.service... Aug 13 00:54:15.330047 systemd[1]: Finished clean-ca-certificates.service. Aug 13 00:54:15.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:15.331592 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:54:15.332375 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 00:54:15.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:15.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:15.335541 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 00:54:15.338958 systemd[1]: Finished ensure-sysext.service. 
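update-ca-certificates.service was skipped above because its condition, ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt, is negated by the leading "!": the unit runs only when that path is not a symlink, and on this host it evidently is one. A sketch of that single condition's evaluation (ignoring systemd's other condition types and the "|" triggering prefix):

    import os

    def condition_path_is_symlink(expr: str) -> bool:
        negate = expr.startswith("!")
        result = os.path.islink(expr.lstrip("!"))
        return (not result) if negate else result

    print(condition_path_is_symlink("!/etc/ssl/certs/ca-certificates.crt"))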
Aug 13 00:54:15.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:15.344000 audit[1141]: SYSTEM_BOOT pid=1141 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Aug 13 00:54:15.348545 systemd[1]: Finished systemd-update-utmp.service. Aug 13 00:54:15.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:15.350983 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:54:15.351218 systemd[1]: Finished modprobe@loop.service. Aug 13 00:54:15.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:15.351000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:15.352121 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 00:54:15.359730 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:54:15.359984 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 00:54:15.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:15.360000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:15.361041 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:54:15.363055 systemd[1]: Finished systemd-journal-catalog-update.service. Aug 13 00:54:15.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:15.365622 systemd[1]: Starting systemd-update-done.service... Aug 13 00:54:15.372513 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:54:15.372727 systemd[1]: Finished modprobe@drm.service. Aug 13 00:54:15.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:15.373000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:15.378979 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). 
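The paired SERVICE_START/SERVICE_STOP records for modprobe@dm_mod, modprobe@efi_pstore, modprobe@loop and modprobe@drm above come from systemd's modprobe@.service template: each instance runs modprobe once for its instance name and exits, which is why every start is immediately followed by "Deactivated successfully". A minimal sketch of inspecting and reproducing one instance (the unit body shown is the usual upstream definition, assumed rather than quoted from this image):

    # Show the template all four instances are generated from
    systemctl cat modprobe@.service
    # Typical upstream body (assumption -- verify on the host):
    #   [Service]
    #   Type=oneshot
    #   ExecStart=-/sbin/modprobe -abq %i
    # One-off equivalent for the dm_mod instance:
    systemctl start modprobe@dm_mod.service    # roughly: modprobe -abq dm_mod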
Aug 13 00:54:15.379017 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:54:15.393742 systemd[1]: Finished systemd-update-done.service. Aug 13 00:54:15.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 00:54:15.415000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Aug 13 00:54:15.415000 audit[1161]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd47c627d0 a2=420 a3=0 items=0 ppid=1133 pid=1161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 00:54:15.415000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Aug 13 00:54:15.416849 augenrules[1161]: No rules Aug 13 00:54:15.416997 systemd[1]: Finished audit-rules.service. Aug 13 00:54:15.455712 systemd[1]: Started systemd-timesyncd.service. Aug 13 00:54:15.456613 systemd[1]: Reached target time-set.target. Aug 13 00:54:15.457467 systemd-resolved[1139]: Positive Trust Anchors: Aug 13 00:54:15.457951 systemd-resolved[1139]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 00:54:15.458004 systemd-resolved[1139]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Aug 13 00:54:15.467544 systemd-resolved[1139]: Using system hostname 'ci-3510.3.8-f-585a890caa'. Aug 13 00:54:15.471349 systemd[1]: Started systemd-resolved.service. Aug 13 00:54:15.472239 systemd[1]: Reached target network.target. Aug 13 00:54:15.472893 systemd[1]: Reached target nss-lookup.target. Aug 13 00:54:15.473555 systemd[1]: Reached target sysinit.target. Aug 13 00:54:15.474501 systemd[1]: Started motdgen.path. Aug 13 00:54:15.475129 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Aug 13 00:54:15.476220 systemd[1]: Started logrotate.timer. Aug 13 00:54:15.477015 systemd[1]: Started mdadm.timer. Aug 13 00:54:15.477599 systemd[1]: Started systemd-tmpfiles-clean.timer. Aug 13 00:54:15.478355 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 13 00:54:15.478407 systemd[1]: Reached target paths.target. Aug 13 00:54:15.478996 systemd[1]: Reached target timers.target. Aug 13 00:54:15.480683 systemd[1]: Listening on dbus.socket. Aug 13 00:54:15.483498 systemd[1]: Starting docker.socket... Aug 13 00:54:15.492247 systemd[1]: Listening on sshd.socket. Aug 13 00:54:16.097639 systemd-resolved[1139]: Clock change detected. Flushing caches. Aug 13 00:54:16.097903 systemd-timesyncd[1140]: Contacted time server 23.186.168.127:123 (0.flatcar.pool.ntp.org). Aug 13 00:54:16.098429 systemd-timesyncd[1140]: Initial clock synchronization to Wed 2025-08-13 00:54:16.097489 UTC. 
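The audit PROCTITLE record above carries the audited command line as hex with NUL separators; decoded, it is augenrules running auditctl to load /etc/audit/audit.rules, which matches the "augenrules[1161]: No rules" line next to it. A small decode sketch:

    # Decode the hex proctitle; NUL argument separators become spaces
    echo 2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 \
        | xxd -r -p | tr '\0' ' '; echo
    # -> /sbin/auditctl -R /etc/audit/audit.rules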
Aug 13 00:54:16.099835 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:54:16.103263 systemd[1]: Listening on docker.socket. Aug 13 00:54:16.104770 systemd[1]: Reached target sockets.target. Aug 13 00:54:16.105808 systemd[1]: Reached target basic.target. Aug 13 00:54:16.106784 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Aug 13 00:54:16.107092 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Aug 13 00:54:16.109337 systemd[1]: Starting containerd.service... Aug 13 00:54:16.113214 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Aug 13 00:54:16.116599 systemd[1]: Starting dbus.service... Aug 13 00:54:16.121757 systemd[1]: Starting enable-oem-cloudinit.service... Aug 13 00:54:16.129427 systemd[1]: Starting extend-filesystems.service... Aug 13 00:54:16.131092 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Aug 13 00:54:16.136542 systemd[1]: Starting motdgen.service... Aug 13 00:54:16.205591 dbus-daemon[1171]: [system] SELinux support is enabled Aug 13 00:54:16.208723 jq[1174]: false Aug 13 00:54:16.140535 systemd[1]: Starting prepare-helm.service... Aug 13 00:54:16.146445 systemd[1]: Starting ssh-key-proc-cmdline.service... Aug 13 00:54:16.151166 systemd-networkd[1008]: eth0: Gained IPv6LL Aug 13 00:54:16.154386 systemd[1]: Starting sshd-keygen.service... Aug 13 00:54:16.164567 systemd[1]: Starting systemd-logind.service... Aug 13 00:54:16.165601 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 00:54:16.215919 jq[1184]: true Aug 13 00:54:16.165887 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 13 00:54:16.168686 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 13 00:54:16.170580 systemd[1]: Starting update-engine.service... Aug 13 00:54:16.176218 systemd[1]: Starting update-ssh-keys-after-ignition.service... Aug 13 00:54:16.182764 systemd[1]: Finished systemd-networkd-wait-online.service. Aug 13 00:54:16.186764 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 00:54:16.187393 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Aug 13 00:54:16.192435 systemd[1]: Reached target network-online.target. Aug 13 00:54:16.198436 systemd[1]: Starting kubelet.service... Aug 13 00:54:16.205891 systemd[1]: Started dbus.service. Aug 13 00:54:16.215378 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 13 00:54:16.215700 systemd[1]: Finished ssh-key-proc-cmdline.service. Aug 13 00:54:16.216928 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 00:54:16.217994 systemd[1]: Reached target system-config.target. Aug 13 00:54:16.219340 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Aug 13 00:54:16.219425 systemd[1]: Reached target user-config.target. Aug 13 00:54:16.231216 extend-filesystems[1175]: Found loop1 Aug 13 00:54:16.248873 jq[1194]: true Aug 13 00:54:16.263010 extend-filesystems[1175]: Found vda Aug 13 00:54:16.263010 extend-filesystems[1175]: Found vda1 Aug 13 00:54:16.263010 extend-filesystems[1175]: Found vda2 Aug 13 00:54:16.266083 extend-filesystems[1175]: Found vda3 Aug 13 00:54:16.266083 extend-filesystems[1175]: Found usr Aug 13 00:54:16.266083 extend-filesystems[1175]: Found vda4 Aug 13 00:54:16.266083 extend-filesystems[1175]: Found vda6 Aug 13 00:54:16.266083 extend-filesystems[1175]: Found vda7 Aug 13 00:54:16.266083 extend-filesystems[1175]: Found vda9 Aug 13 00:54:16.266083 extend-filesystems[1175]: Checking size of /dev/vda9 Aug 13 00:54:16.317221 tar[1187]: linux-amd64/LICENSE Aug 13 00:54:16.317221 tar[1187]: linux-amd64/helm Aug 13 00:54:16.350663 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 00:54:16.350922 systemd[1]: Finished motdgen.service. Aug 13 00:54:16.365265 extend-filesystems[1175]: Resized partition /dev/vda9 Aug 13 00:54:16.395543 extend-filesystems[1225]: resize2fs 1.46.5 (30-Dec-2021) Aug 13 00:54:16.403004 update_engine[1183]: I0813 00:54:16.402218 1183 main.cc:92] Flatcar Update Engine starting Aug 13 00:54:16.404189 systemd-networkd[1008]: eth1: Gained IPv6LL Aug 13 00:54:16.411511 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Aug 13 00:54:16.417468 bash[1224]: Updated "/home/core/.ssh/authorized_keys" Aug 13 00:54:16.418113 systemd[1]: Started update-engine.service. Aug 13 00:54:16.424688 update_engine[1183]: I0813 00:54:16.418222 1183 update_check_scheduler.cc:74] Next update check in 7m33s Aug 13 00:54:16.422485 systemd[1]: Started locksmithd.service. Aug 13 00:54:16.428890 systemd[1]: Finished update-ssh-keys-after-ignition.service. Aug 13 00:54:16.481017 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Aug 13 00:54:16.490273 coreos-metadata[1170]: Aug 13 00:54:16.490 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Aug 13 00:54:16.494845 extend-filesystems[1225]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Aug 13 00:54:16.494845 extend-filesystems[1225]: old_desc_blocks = 1, new_desc_blocks = 8 Aug 13 00:54:16.494845 extend-filesystems[1225]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Aug 13 00:54:16.499550 extend-filesystems[1175]: Resized filesystem in /dev/vda9 Aug 13 00:54:16.499550 extend-filesystems[1175]: Found vdb Aug 13 00:54:16.496197 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 13 00:54:16.496451 systemd[1]: Finished extend-filesystems.service. Aug 13 00:54:16.509520 coreos-metadata[1170]: Aug 13 00:54:16.507 INFO Fetch successful Aug 13 00:54:16.515239 unknown[1170]: wrote ssh authorized keys file for user: core Aug 13 00:54:16.534061 update-ssh-keys[1229]: Updated "/home/core/.ssh/authorized_keys" Aug 13 00:54:16.537277 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Aug 13 00:54:16.580478 env[1190]: time="2025-08-13T00:54:16.580329796Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Aug 13 00:54:16.623025 systemd-logind[1182]: Watching system buttons on /dev/input/event1 (Power Button) Aug 13 00:54:16.623704 systemd-logind[1182]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 13 00:54:16.628448 systemd-logind[1182]: New seat seat0. 
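The extend-filesystems/resize2fs exchange above is an online grow of the root filesystem: /dev/vda9 goes from 553472 to 15121403 4 KiB blocks (roughly 2.1 GiB to 57.7 GiB) while mounted on /. A minimal sketch of the equivalent manual check and resize, assuming the same device:

    # Inspect current geometry of the mounted ext4 filesystem
    dumpe2fs -h /dev/vda9 | grep -E 'Block (count|size)'
    # Grow online to fill the (already enlarged) partition
    resize2fs /dev/vda9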
Aug 13 00:54:16.634036 systemd[1]: Started systemd-logind.service. Aug 13 00:54:16.688723 env[1190]: time="2025-08-13T00:54:16.687839118Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 13 00:54:16.688723 env[1190]: time="2025-08-13T00:54:16.688090882Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:54:16.692676 env[1190]: time="2025-08-13T00:54:16.692581221Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.189-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:54:16.692676 env[1190]: time="2025-08-13T00:54:16.692662185Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:54:16.701142 env[1190]: time="2025-08-13T00:54:16.700014514Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:54:16.701142 env[1190]: time="2025-08-13T00:54:16.700091120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Aug 13 00:54:16.701142 env[1190]: time="2025-08-13T00:54:16.700121355Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Aug 13 00:54:16.701142 env[1190]: time="2025-08-13T00:54:16.700141097Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 13 00:54:16.704645 env[1190]: time="2025-08-13T00:54:16.704132255Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:54:16.704645 env[1190]: time="2025-08-13T00:54:16.704554996Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:54:16.704884 env[1190]: time="2025-08-13T00:54:16.704789455Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:54:16.704884 env[1190]: time="2025-08-13T00:54:16.704809146Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Aug 13 00:54:16.705044 env[1190]: time="2025-08-13T00:54:16.704877895Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Aug 13 00:54:16.705044 env[1190]: time="2025-08-13T00:54:16.704901550Z" level=info msg="metadata content store policy set" policy=shared Aug 13 00:54:16.719913 env[1190]: time="2025-08-13T00:54:16.719355350Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Aug 13 00:54:16.719913 env[1190]: time="2025-08-13T00:54:16.719428412Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 13 00:54:16.719913 env[1190]: time="2025-08-13T00:54:16.719446578Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." 
type=io.containerd.gc.v1 Aug 13 00:54:16.719913 env[1190]: time="2025-08-13T00:54:16.719498820Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 13 00:54:16.719913 env[1190]: time="2025-08-13T00:54:16.719516931Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Aug 13 00:54:16.719913 env[1190]: time="2025-08-13T00:54:16.719534953Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Aug 13 00:54:16.719913 env[1190]: time="2025-08-13T00:54:16.719555633Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Aug 13 00:54:16.719913 env[1190]: time="2025-08-13T00:54:16.719576470Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 13 00:54:16.719913 env[1190]: time="2025-08-13T00:54:16.719594833Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Aug 13 00:54:16.719913 env[1190]: time="2025-08-13T00:54:16.719630615Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Aug 13 00:54:16.719913 env[1190]: time="2025-08-13T00:54:16.719649669Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 13 00:54:16.719913 env[1190]: time="2025-08-13T00:54:16.719663391Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 13 00:54:16.719913 env[1190]: time="2025-08-13T00:54:16.719863300Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 13 00:54:16.720525 env[1190]: time="2025-08-13T00:54:16.719976904Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Aug 13 00:54:16.720525 env[1190]: time="2025-08-13T00:54:16.720280182Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 13 00:54:16.720525 env[1190]: time="2025-08-13T00:54:16.720308456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 13 00:54:16.720525 env[1190]: time="2025-08-13T00:54:16.720323303Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 13 00:54:16.720525 env[1190]: time="2025-08-13T00:54:16.720386824Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 13 00:54:16.720525 env[1190]: time="2025-08-13T00:54:16.720407088Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 13 00:54:16.720525 env[1190]: time="2025-08-13T00:54:16.720424648Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 13 00:54:16.720525 env[1190]: time="2025-08-13T00:54:16.720437868Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 13 00:54:16.720525 env[1190]: time="2025-08-13T00:54:16.720450785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 13 00:54:16.720525 env[1190]: time="2025-08-13T00:54:16.720463748Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Aug 13 00:54:16.720525 env[1190]: time="2025-08-13T00:54:16.720475227Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 13 00:54:16.720525 env[1190]: time="2025-08-13T00:54:16.720490165Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 13 00:54:16.720525 env[1190]: time="2025-08-13T00:54:16.720513027Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 13 00:54:16.721006 env[1190]: time="2025-08-13T00:54:16.720691159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 13 00:54:16.721006 env[1190]: time="2025-08-13T00:54:16.720724985Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 13 00:54:16.721006 env[1190]: time="2025-08-13T00:54:16.720745612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 13 00:54:16.721006 env[1190]: time="2025-08-13T00:54:16.720759379Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 13 00:54:16.721006 env[1190]: time="2025-08-13T00:54:16.720780061Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Aug 13 00:54:16.721006 env[1190]: time="2025-08-13T00:54:16.720794143Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 13 00:54:16.721006 env[1190]: time="2025-08-13T00:54:16.720817579Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Aug 13 00:54:16.721006 env[1190]: time="2025-08-13T00:54:16.720864994Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Aug 13 00:54:16.721315 env[1190]: time="2025-08-13T00:54:16.721140368Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 13 00:54:16.721315 env[1190]: time="2025-08-13T00:54:16.721201154Z" level=info msg="Connect containerd service" Aug 13 00:54:16.721315 env[1190]: time="2025-08-13T00:54:16.721252913Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 13 00:54:16.728612 env[1190]: time="2025-08-13T00:54:16.728476145Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 00:54:16.730826 env[1190]: time="2025-08-13T00:54:16.729201298Z" level=info msg="Start subscribing containerd event" Aug 13 00:54:16.730826 env[1190]: time="2025-08-13T00:54:16.729289414Z" level=info msg="Start recovering state" Aug 13 00:54:16.730826 env[1190]: time="2025-08-13T00:54:16.729297916Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 00:54:16.730826 env[1190]: time="2025-08-13T00:54:16.729360484Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Aug 13 00:54:16.730826 env[1190]: time="2025-08-13T00:54:16.729396269Z" level=info msg="Start event monitor" Aug 13 00:54:16.730826 env[1190]: time="2025-08-13T00:54:16.729436402Z" level=info msg="containerd successfully booted in 0.173994s" Aug 13 00:54:16.729583 systemd[1]: Started containerd.service. Aug 13 00:54:16.732728 env[1190]: time="2025-08-13T00:54:16.731830753Z" level=info msg="Start snapshots syncer" Aug 13 00:54:16.732728 env[1190]: time="2025-08-13T00:54:16.731904809Z" level=info msg="Start cni network conf syncer for default" Aug 13 00:54:16.732728 env[1190]: time="2025-08-13T00:54:16.731914629Z" level=info msg="Start streaming server" Aug 13 00:54:17.054744 locksmithd[1226]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 00:54:17.419049 systemd[1]: Created slice system-sshd.slice. Aug 13 00:54:17.707208 tar[1187]: linux-amd64/README.md Aug 13 00:54:17.714132 systemd[1]: Finished prepare-helm.service. Aug 13 00:54:17.768073 sshd_keygen[1204]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 00:54:17.808124 systemd[1]: Finished sshd-keygen.service. Aug 13 00:54:17.810763 systemd[1]: Starting issuegen.service... Aug 13 00:54:17.813520 systemd[1]: Started sshd@0-143.198.229.35:22-139.178.68.195:40974.service. Aug 13 00:54:17.832179 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 00:54:17.832405 systemd[1]: Finished issuegen.service. Aug 13 00:54:17.835501 systemd[1]: Starting systemd-user-sessions.service... Aug 13 00:54:17.851097 systemd[1]: Finished systemd-user-sessions.service. Aug 13 00:54:17.855315 systemd[1]: Started getty@tty1.service. Aug 13 00:54:17.862230 systemd[1]: Started serial-getty@ttyS0.service. Aug 13 00:54:17.863656 systemd[1]: Reached target getty.target. Aug 13 00:54:17.910421 sshd[1249]: Accepted publickey for core from 139.178.68.195 port 40974 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78 Aug 13 00:54:17.913168 sshd[1249]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:54:17.935594 systemd[1]: Created slice user-500.slice. Aug 13 00:54:17.942178 systemd[1]: Starting user-runtime-dir@500.service... Aug 13 00:54:17.949196 systemd-logind[1182]: New session 1 of user core. Aug 13 00:54:17.967489 systemd[1]: Finished user-runtime-dir@500.service. Aug 13 00:54:17.973517 systemd[1]: Starting user@500.service... Aug 13 00:54:17.978535 (systemd)[1258]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:54:18.092796 systemd[1258]: Queued start job for default target default.target. Aug 13 00:54:18.094688 systemd[1258]: Reached target paths.target. Aug 13 00:54:18.095374 systemd[1258]: Reached target sockets.target. Aug 13 00:54:18.095551 systemd[1258]: Reached target timers.target. Aug 13 00:54:18.095680 systemd[1258]: Reached target basic.target. Aug 13 00:54:18.096028 systemd[1]: Started user@500.service. Aug 13 00:54:18.097093 systemd[1258]: Reached target default.target. Aug 13 00:54:18.097167 systemd[1258]: Startup finished in 106ms. Aug 13 00:54:18.098055 systemd[1]: Started session-1.scope. Aug 13 00:54:18.179116 systemd[1]: Started sshd@1-143.198.229.35:22-139.178.68.195:40980.service. 
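sshd-keygen above generated fresh RSA/ECDSA/ED25519 host keys on first boot, and each "Accepted publickey" line logs the SHA256 fingerprint of the client key used to log in as core. A sketch for checking fingerprints on both ends (paths are the conventional OpenSSH defaults, assumed here):

    # Server side: fingerprints of the generated host keys
    for k in /etc/ssh/ssh_host_*_key.pub; do ssh-keygen -lf "$k"; done
    # Client side: fingerprint of the key offered for core, to compare
    # against the SHA256:yzXwfpA3... value in the log
    ssh-keygen -lf ~/.ssh/id_rsa.pub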
Aug 13 00:54:18.273030 sshd[1267]: Accepted publickey for core from 139.178.68.195 port 40980 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78 Aug 13 00:54:18.274516 sshd[1267]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:54:18.287736 systemd-logind[1182]: New session 2 of user core. Aug 13 00:54:18.288743 systemd[1]: Started session-2.scope. Aug 13 00:54:18.399450 sshd[1267]: pam_unix(sshd:session): session closed for user core Aug 13 00:54:18.409780 systemd[1]: Started sshd@2-143.198.229.35:22-139.178.68.195:40994.service. Aug 13 00:54:18.413417 systemd[1]: sshd@1-143.198.229.35:22-139.178.68.195:40980.service: Deactivated successfully. Aug 13 00:54:18.416618 systemd[1]: session-2.scope: Deactivated successfully. Aug 13 00:54:18.419252 systemd-logind[1182]: Session 2 logged out. Waiting for processes to exit. Aug 13 00:54:18.423092 systemd-logind[1182]: Removed session 2. Aug 13 00:54:18.425608 systemd[1]: Started kubelet.service. Aug 13 00:54:18.427345 systemd[1]: Reached target multi-user.target. Aug 13 00:54:18.431477 systemd[1]: Starting systemd-update-utmp-runlevel.service... Aug 13 00:54:18.448818 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Aug 13 00:54:18.449172 systemd[1]: Finished systemd-update-utmp-runlevel.service. Aug 13 00:54:18.450293 systemd[1]: Startup finished in 1.217s (kernel) + 6.306s (initrd) + 9.643s (userspace) = 17.167s. Aug 13 00:54:18.497677 sshd[1273]: Accepted publickey for core from 139.178.68.195 port 40994 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78 Aug 13 00:54:18.501542 sshd[1273]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:54:18.514778 systemd[1]: Started session-3.scope. Aug 13 00:54:18.515877 systemd-logind[1182]: New session 3 of user core. Aug 13 00:54:18.594631 sshd[1273]: pam_unix(sshd:session): session closed for user core Aug 13 00:54:18.601257 systemd[1]: sshd@2-143.198.229.35:22-139.178.68.195:40994.service: Deactivated successfully. Aug 13 00:54:18.602431 systemd[1]: session-3.scope: Deactivated successfully. Aug 13 00:54:18.604434 systemd-logind[1182]: Session 3 logged out. Waiting for processes to exit. Aug 13 00:54:18.607011 systemd-logind[1182]: Removed session 3. Aug 13 00:54:19.327692 kubelet[1276]: E0813 00:54:19.327611 1276 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:54:19.330626 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:54:19.330896 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:54:19.331365 systemd[1]: kubelet.service: Consumed 1.716s CPU time. Aug 13 00:54:28.606320 systemd[1]: Started sshd@3-143.198.229.35:22-139.178.68.195:54856.service. Aug 13 00:54:28.672177 sshd[1288]: Accepted publickey for core from 139.178.68.195 port 54856 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78 Aug 13 00:54:28.675735 sshd[1288]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:54:28.685243 systemd[1]: Started session-4.scope. Aug 13 00:54:28.686204 systemd-logind[1182]: New session 4 of user core. 
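The kubelet error above recurs on every restart through the rest of this log: /var/lib/kubelet/config.yaml does not exist yet, so the process exits with status 1 and systemd keeps rescheduling it. On a kubeadm-provisioned node that file is only written by kubeadm init or kubeadm join, so the loop is expected until bootstrap runs. Purely for illustration, a hypothetical minimal KubeletConfiguration of the kind that lands there (values are assumptions, not taken from this host):

    # Hypothetical stand-in; kubeadm normally generates this file.
    cat >/var/lib/kubelet/config.yaml <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    EOF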
Aug 13 00:54:28.760987 sshd[1288]: pam_unix(sshd:session): session closed for user core Aug 13 00:54:28.768910 systemd[1]: Started sshd@4-143.198.229.35:22-139.178.68.195:54862.service. Aug 13 00:54:28.770943 systemd[1]: sshd@3-143.198.229.35:22-139.178.68.195:54856.service: Deactivated successfully. Aug 13 00:54:28.772420 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 00:54:28.773758 systemd-logind[1182]: Session 4 logged out. Waiting for processes to exit. Aug 13 00:54:28.777357 systemd-logind[1182]: Removed session 4. Aug 13 00:54:28.829495 sshd[1293]: Accepted publickey for core from 139.178.68.195 port 54862 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78 Aug 13 00:54:28.836622 sshd[1293]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:54:28.847457 systemd[1]: Started session-5.scope. Aug 13 00:54:28.848382 systemd-logind[1182]: New session 5 of user core. Aug 13 00:54:28.921779 sshd[1293]: pam_unix(sshd:session): session closed for user core Aug 13 00:54:28.929306 systemd[1]: sshd@4-143.198.229.35:22-139.178.68.195:54862.service: Deactivated successfully. Aug 13 00:54:28.930420 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 00:54:28.931566 systemd-logind[1182]: Session 5 logged out. Waiting for processes to exit. Aug 13 00:54:28.934230 systemd[1]: Started sshd@5-143.198.229.35:22-139.178.68.195:54866.service. Aug 13 00:54:28.936633 systemd-logind[1182]: Removed session 5. Aug 13 00:54:29.006738 sshd[1300]: Accepted publickey for core from 139.178.68.195 port 54866 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78 Aug 13 00:54:29.009174 sshd[1300]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:54:29.018119 systemd[1]: Started session-6.scope. Aug 13 00:54:29.018938 systemd-logind[1182]: New session 6 of user core. Aug 13 00:54:29.095384 sshd[1300]: pam_unix(sshd:session): session closed for user core Aug 13 00:54:29.103899 systemd[1]: Started sshd@6-143.198.229.35:22-139.178.68.195:54876.service. Aug 13 00:54:29.105048 systemd[1]: sshd@5-143.198.229.35:22-139.178.68.195:54866.service: Deactivated successfully. Aug 13 00:54:29.106530 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 00:54:29.109156 systemd-logind[1182]: Session 6 logged out. Waiting for processes to exit. Aug 13 00:54:29.110742 systemd-logind[1182]: Removed session 6. Aug 13 00:54:29.184757 sshd[1305]: Accepted publickey for core from 139.178.68.195 port 54876 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78 Aug 13 00:54:29.187243 sshd[1305]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:54:29.196111 systemd-logind[1182]: New session 7 of user core. Aug 13 00:54:29.197128 systemd[1]: Started session-7.scope. Aug 13 00:54:29.287119 sudo[1309]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 00:54:29.288292 sudo[1309]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 13 00:54:29.329987 systemd[1]: Starting docker.service... Aug 13 00:54:29.332428 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 13 00:54:29.332663 systemd[1]: Stopped kubelet.service. Aug 13 00:54:29.332743 systemd[1]: kubelet.service: Consumed 1.716s CPU time. Aug 13 00:54:29.337660 systemd[1]: Starting kubelet.service... 
Aug 13 00:54:29.441860 env[1319]: time="2025-08-13T00:54:29.441626427Z" level=info msg="Starting up" Aug 13 00:54:29.445157 env[1319]: time="2025-08-13T00:54:29.445089710Z" level=info msg="parsed scheme: \"unix\"" module=grpc Aug 13 00:54:29.445430 env[1319]: time="2025-08-13T00:54:29.445400071Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Aug 13 00:54:29.445645 env[1319]: time="2025-08-13T00:54:29.445608524Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Aug 13 00:54:29.445796 env[1319]: time="2025-08-13T00:54:29.445771181Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Aug 13 00:54:29.450534 env[1319]: time="2025-08-13T00:54:29.450432951Z" level=info msg="parsed scheme: \"unix\"" module=grpc Aug 13 00:54:29.450801 env[1319]: time="2025-08-13T00:54:29.450768693Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Aug 13 00:54:29.450949 env[1319]: time="2025-08-13T00:54:29.450921244Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Aug 13 00:54:29.451113 env[1319]: time="2025-08-13T00:54:29.451087154Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Aug 13 00:54:29.462290 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2228393215-merged.mount: Deactivated successfully. Aug 13 00:54:29.573067 systemd[1]: Started kubelet.service. Aug 13 00:54:29.584812 env[1319]: time="2025-08-13T00:54:29.584674691Z" level=info msg="Loading containers: start." Aug 13 00:54:29.681488 kubelet[1330]: E0813 00:54:29.681423 1330 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:54:29.686528 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:54:29.686770 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:54:29.838002 kernel: Initializing XFRM netlink socket Aug 13 00:54:29.899159 env[1319]: time="2025-08-13T00:54:29.899075590Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Aug 13 00:54:30.034479 systemd-networkd[1008]: docker0: Link UP Aug 13 00:54:30.081218 env[1319]: time="2025-08-13T00:54:30.080623168Z" level=info msg="Loading containers: done." Aug 13 00:54:30.126576 env[1319]: time="2025-08-13T00:54:30.126470695Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 00:54:30.126982 env[1319]: time="2025-08-13T00:54:30.126749242Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Aug 13 00:54:30.127173 env[1319]: time="2025-08-13T00:54:30.127032370Z" level=info msg="Daemon has completed initialization" Aug 13 00:54:30.172517 systemd[1]: Started docker.service. Aug 13 00:54:30.191139 env[1319]: time="2025-08-13T00:54:30.190994786Z" level=info msg="API listen on /run/docker.sock" Aug 13 00:54:30.238784 systemd[1]: Starting coreos-metadata.service... 
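coreos-metadata, started above, reads the DigitalOcean metadata service at the link-local address its fetch lines show just below. An equivalent manual query from inside the droplet:

    # Same endpoint the unit fetches; reachable only from the droplet itself
    curl -s http://169.254.169.254/metadata/v1.json | head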
Aug 13 00:54:30.316351 coreos-metadata[1445]: Aug 13 00:54:30.315 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Aug 13 00:54:30.332926 coreos-metadata[1445]: Aug 13 00:54:30.332 INFO Fetch successful Aug 13 00:54:30.357997 systemd[1]: Finished coreos-metadata.service. Aug 13 00:54:31.509548 env[1190]: time="2025-08-13T00:54:31.509441822Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\"" Aug 13 00:54:32.170123 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount632381705.mount: Deactivated successfully. Aug 13 00:54:34.649379 env[1190]: time="2025-08-13T00:54:34.649303344Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:34.651411 env[1190]: time="2025-08-13T00:54:34.651341592Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:34.654297 env[1190]: time="2025-08-13T00:54:34.654230259Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:34.656706 env[1190]: time="2025-08-13T00:54:34.656643173Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:34.657991 env[1190]: time="2025-08-13T00:54:34.657913386Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\" returns image reference \"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\"" Aug 13 00:54:34.659051 env[1190]: time="2025-08-13T00:54:34.659016943Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\"" Aug 13 00:54:36.878482 env[1190]: time="2025-08-13T00:54:36.878377272Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:36.880816 env[1190]: time="2025-08-13T00:54:36.880753344Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:36.883694 env[1190]: time="2025-08-13T00:54:36.883618683Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:36.886235 env[1190]: time="2025-08-13T00:54:36.886166769Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:36.887855 env[1190]: time="2025-08-13T00:54:36.887779625Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\" returns image reference \"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\"" Aug 13 00:54:36.888621 env[1190]: time="2025-08-13T00:54:36.888590974Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\"" Aug 13 00:54:38.768535 
env[1190]: time="2025-08-13T00:54:38.768434977Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:38.771030 env[1190]: time="2025-08-13T00:54:38.770916153Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:38.775373 env[1190]: time="2025-08-13T00:54:38.775289879Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:38.778273 env[1190]: time="2025-08-13T00:54:38.778196116Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:38.780267 env[1190]: time="2025-08-13T00:54:38.780184121Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\" returns image reference \"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\"" Aug 13 00:54:38.781198 env[1190]: time="2025-08-13T00:54:38.781150925Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\"" Aug 13 00:54:39.930714 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 13 00:54:39.931232 systemd[1]: Stopped kubelet.service. Aug 13 00:54:39.934275 systemd[1]: Starting kubelet.service... Aug 13 00:54:40.119069 systemd[1]: Started kubelet.service. Aug 13 00:54:40.236771 kubelet[1467]: E0813 00:54:40.236589 1467 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:54:40.239838 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:54:40.240084 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:54:40.357785 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1701568389.mount: Deactivated successfully. 
Aug 13 00:54:41.402562 env[1190]: time="2025-08-13T00:54:41.402461595Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:41.405009 env[1190]: time="2025-08-13T00:54:41.404927052Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:41.406564 env[1190]: time="2025-08-13T00:54:41.406516470Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:41.408076 env[1190]: time="2025-08-13T00:54:41.408021399Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:41.408696 env[1190]: time="2025-08-13T00:54:41.408658268Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\" returns image reference \"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\"" Aug 13 00:54:41.409381 env[1190]: time="2025-08-13T00:54:41.409351113Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 00:54:42.039207 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3110887122.mount: Deactivated successfully. Aug 13 00:54:43.517449 env[1190]: time="2025-08-13T00:54:43.517319554Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:43.529145 env[1190]: time="2025-08-13T00:54:43.529055434Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:43.534341 env[1190]: time="2025-08-13T00:54:43.534258701Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:43.539511 env[1190]: time="2025-08-13T00:54:43.539427234Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:43.540420 env[1190]: time="2025-08-13T00:54:43.540303596Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 00:54:43.542823 env[1190]: time="2025-08-13T00:54:43.542767113Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 00:54:44.496627 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3423189870.mount: Deactivated successfully. 
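Each PullImage above resolves a tag, emits ImageCreate/ImageUpdate events, and records the digest-pinned reference returned through the CRI. Assuming crictl is available and pointed at containerd's default socket, a sketch of reproducing one of these pulls by hand:

    # Pull through the same CRI image service containerd logs above
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
        pull registry.k8s.io/coredns/coredns:v1.11.3
    crictl images | grep coredns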
Aug 13 00:54:44.546595 env[1190]: time="2025-08-13T00:54:44.542015480Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:44.564386 env[1190]: time="2025-08-13T00:54:44.564301189Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:44.583361 env[1190]: time="2025-08-13T00:54:44.580938750Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:44.594047 env[1190]: time="2025-08-13T00:54:44.591956215Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:44.604474 env[1190]: time="2025-08-13T00:54:44.604349673Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 13 00:54:44.631724 env[1190]: time="2025-08-13T00:54:44.631616752Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Aug 13 00:54:45.205034 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1997242060.mount: Deactivated successfully. Aug 13 00:54:48.536048 env[1190]: time="2025-08-13T00:54:48.535923218Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:48.541137 env[1190]: time="2025-08-13T00:54:48.541066433Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:48.546180 env[1190]: time="2025-08-13T00:54:48.544402100Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:48.550457 env[1190]: time="2025-08-13T00:54:48.550372326Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Aug 13 00:54:48.551561 env[1190]: time="2025-08-13T00:54:48.551508610Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:50.430589 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Aug 13 00:54:50.431041 systemd[1]: Stopped kubelet.service. Aug 13 00:54:50.433587 systemd[1]: Starting kubelet.service... Aug 13 00:54:50.936895 systemd[1]: Started kubelet.service. 
Aug 13 00:54:51.033270 kubelet[1494]: E0813 00:54:51.033175 1494 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:54:51.035408 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:54:51.035622 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:54:52.871429 systemd[1]: Stopped kubelet.service. Aug 13 00:54:52.875247 systemd[1]: Starting kubelet.service... Aug 13 00:54:52.924061 systemd[1]: Reloading. Aug 13 00:54:53.086033 /usr/lib/systemd/system-generators/torcx-generator[1527]: time="2025-08-13T00:54:53Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 00:54:53.086848 /usr/lib/systemd/system-generators/torcx-generator[1527]: time="2025-08-13T00:54:53Z" level=info msg="torcx already run" Aug 13 00:54:53.198128 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 00:54:53.198155 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 00:54:53.224449 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:54:53.362423 systemd[1]: Stopping kubelet.service... Aug 13 00:54:53.364128 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 00:54:53.364337 systemd[1]: Stopped kubelet.service. Aug 13 00:54:53.366318 systemd[1]: Starting kubelet.service... Aug 13 00:54:53.556386 systemd[1]: Started kubelet.service. Aug 13 00:54:53.640442 kubelet[1580]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:54:53.641011 kubelet[1580]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 13 00:54:53.641154 kubelet[1580]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
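During the Reloading pass above, systemd flags locksmithd.service for the legacy cgroup-v1 directives CPUShares= and MemoryLimit=. Since /usr is read-only on this image, the usual remedy is a drop-in under /etc supplying the modern equivalents; a sketch, assuming the default CPUShares=1024 (which corresponds to CPUWeight=100; the shipped unit's old lines remain, so the note may persist until removed upstream):

    # Drop-in adding the cgroup-v2 equivalents flagged at daemon-reload
    mkdir -p /etc/systemd/system/locksmithd.service.d
    cat >/etc/systemd/system/locksmithd.service.d/10-cgroupv2.conf <<'EOF'
    [Service]
    CPUWeight=100
    MemoryMax=infinity
    EOF
    systemctl daemon-reload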
Aug 13 00:54:53.641611 kubelet[1580]: I0813 00:54:53.641560 1580 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:54:54.180052 kubelet[1580]: I0813 00:54:54.179698 1580 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Aug 13 00:54:54.180052 kubelet[1580]: I0813 00:54:54.180045 1580 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:54:54.180925 kubelet[1580]: I0813 00:54:54.180869 1580 server.go:954] "Client rotation is on, will bootstrap in background" Aug 13 00:54:54.242689 kubelet[1580]: E0813 00:54:54.242617 1580 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://143.198.229.35:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 143.198.229.35:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:54:54.245287 kubelet[1580]: I0813 00:54:54.245218 1580 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:54:54.262472 kubelet[1580]: E0813 00:54:54.262412 1580 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 00:54:54.262849 kubelet[1580]: I0813 00:54:54.262824 1580 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 00:54:54.267797 kubelet[1580]: I0813 00:54:54.267749 1580 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 00:54:54.268664 kubelet[1580]: I0813 00:54:54.268598 1580 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:54:54.269252 kubelet[1580]: I0813 00:54:54.268820 1580 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-f-585a890caa","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 00:54:54.269619 kubelet[1580]: I0813 00:54:54.269597 1580 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:54:54.269733 kubelet[1580]: I0813 00:54:54.269717 1580 container_manager_linux.go:304] "Creating device plugin manager" Aug 13 00:54:54.270103 kubelet[1580]: I0813 00:54:54.270079 1580 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:54:54.289550 kubelet[1580]: I0813 00:54:54.289483 1580 kubelet.go:446] "Attempting to sync node with API server" Aug 13 00:54:54.290034 kubelet[1580]: I0813 00:54:54.289844 1580 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:54:54.290288 kubelet[1580]: I0813 00:54:54.290269 1580 kubelet.go:352] "Adding apiserver pod source" Aug 13 00:54:54.290423 kubelet[1580]: I0813 00:54:54.290405 1580 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:54:54.303838 kubelet[1580]: W0813 00:54:54.303114 1580 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://143.198.229.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-f-585a890caa&limit=500&resourceVersion=0": dial tcp 143.198.229.35:6443: connect: connection refused Aug 13 00:54:54.303838 kubelet[1580]: E0813 00:54:54.303239 1580 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://143.198.229.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-f-585a890caa&limit=500&resourceVersion=0\": dial tcp 143.198.229.35:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:54:54.304238 
kubelet[1580]: W0813 00:54:54.304177 1580 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://143.198.229.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 143.198.229.35:6443: connect: connection refused Aug 13 00:54:54.304371 kubelet[1580]: E0813 00:54:54.304256 1580 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://143.198.229.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 143.198.229.35:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:54:54.304457 kubelet[1580]: I0813 00:54:54.304417 1580 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Aug 13 00:54:54.305314 kubelet[1580]: I0813 00:54:54.305249 1580 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 00:54:54.310411 kubelet[1580]: W0813 00:54:54.310320 1580 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 13 00:54:54.317202 kubelet[1580]: I0813 00:54:54.317127 1580 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 00:54:54.317441 kubelet[1580]: I0813 00:54:54.317248 1580 server.go:1287] "Started kubelet" Aug 13 00:54:54.338787 kubelet[1580]: E0813 00:54:54.337169 1580 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://143.198.229.35:6443/api/v1/namespaces/default/events\": dial tcp 143.198.229.35:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.8-f-585a890caa.185b2d7cea1200e5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.8-f-585a890caa,UID:ci-3510.3.8-f-585a890caa,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.8-f-585a890caa,},FirstTimestamp:2025-08-13 00:54:54.317183205 +0000 UTC m=+0.749548952,LastTimestamp:2025-08-13 00:54:54.317183205 +0000 UTC m=+0.749548952,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.8-f-585a890caa,}" Aug 13 00:54:54.341021 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Aug 13 00:54:54.342200 kubelet[1580]: I0813 00:54:54.341410 1580 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:54:54.345199 kubelet[1580]: E0813 00:54:54.345166 1580 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:54:54.348003 kubelet[1580]: I0813 00:54:54.347918 1580 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:54:54.349709 kubelet[1580]: I0813 00:54:54.349676 1580 server.go:479] "Adding debug handlers to kubelet server" Aug 13 00:54:54.351405 kubelet[1580]: I0813 00:54:54.351318 1580 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:54:54.351888 kubelet[1580]: I0813 00:54:54.351864 1580 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:54:54.352451 kubelet[1580]: I0813 00:54:54.352425 1580 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:54:54.354838 kubelet[1580]: I0813 00:54:54.354811 1580 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 00:54:54.356916 kubelet[1580]: E0813 00:54:54.355311 1580 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-f-585a890caa\" not found" Aug 13 00:54:54.357166 kubelet[1580]: I0813 00:54:54.356156 1580 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 00:54:54.357288 kubelet[1580]: I0813 00:54:54.356379 1580 factory.go:221] Registration of the systemd container factory successfully Aug 13 00:54:54.357502 kubelet[1580]: I0813 00:54:54.357481 1580 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:54:54.358167 kubelet[1580]: I0813 00:54:54.358149 1580 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:54:54.358328 kubelet[1580]: E0813 00:54:54.356575 1580 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.229.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-f-585a890caa?timeout=10s\": dial tcp 143.198.229.35:6443: connect: connection refused" interval="200ms" Aug 13 00:54:54.359283 kubelet[1580]: W0813 00:54:54.359231 1580 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://143.198.229.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.198.229.35:6443: connect: connection refused Aug 13 00:54:54.359477 kubelet[1580]: E0813 00:54:54.359437 1580 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://143.198.229.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 143.198.229.35:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:54:54.360846 kubelet[1580]: I0813 00:54:54.360826 1580 factory.go:221] Registration of the containerd container factory successfully Aug 13 00:54:54.385485 kubelet[1580]: I0813 00:54:54.385443 1580 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 00:54:54.385814 kubelet[1580]: I0813 00:54:54.385791 1580 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 00:54:54.385977 kubelet[1580]: I0813 00:54:54.385943 1580 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:54:54.389224 kubelet[1580]: I0813 00:54:54.389186 1580 policy_none.go:49] "None policy: Start" Aug 13 
00:54:54.389767 kubelet[1580]: I0813 00:54:54.389443 1580 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 00:54:54.389903 kubelet[1580]: I0813 00:54:54.389887 1580 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:54:54.401608 systemd[1]: Created slice kubepods.slice. Aug 13 00:54:54.410537 kubelet[1580]: I0813 00:54:54.410447 1580 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 00:54:54.412617 kubelet[1580]: I0813 00:54:54.412551 1580 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 13 00:54:54.412824 kubelet[1580]: I0813 00:54:54.412650 1580 status_manager.go:227] "Starting to sync pod status with apiserver" Aug 13 00:54:54.412824 kubelet[1580]: I0813 00:54:54.412731 1580 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Aug 13 00:54:54.412824 kubelet[1580]: I0813 00:54:54.412753 1580 kubelet.go:2382] "Starting kubelet main sync loop" Aug 13 00:54:54.412977 kubelet[1580]: E0813 00:54:54.412855 1580 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:54:54.420055 kubelet[1580]: W0813 00:54:54.420001 1580 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://143.198.229.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.198.229.35:6443: connect: connection refused Aug 13 00:54:54.421713 kubelet[1580]: E0813 00:54:54.421653 1580 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://143.198.229.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 143.198.229.35:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:54:54.425843 systemd[1]: Created slice kubepods-burstable.slice. Aug 13 00:54:54.432751 systemd[1]: Created slice kubepods-besteffort.slice. Aug 13 00:54:54.440828 kubelet[1580]: I0813 00:54:54.440782 1580 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 00:54:54.441181 kubelet[1580]: I0813 00:54:54.441152 1580 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:54:54.441272 kubelet[1580]: I0813 00:54:54.441197 1580 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:54:54.443532 kubelet[1580]: I0813 00:54:54.443485 1580 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:54:54.444497 kubelet[1580]: E0813 00:54:54.444463 1580 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Aug 13 00:54:54.444775 kubelet[1580]: E0813 00:54:54.444747 1580 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.8-f-585a890caa\" not found" Aug 13 00:54:54.526827 systemd[1]: Created slice kubepods-burstable-pod0fd5a8f9f70b8712996cfbd19368b728.slice. 
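Editor's note: every "connection refused" against 143.198.229.35:6443 above is the usual bootstrap ordering problem: this kubelet is itself about to start the kube-apiserver static pod, so its reflectors and the lease controller simply retry with growing intervals (interval="200ms" above, then 400ms, 800ms, and 1.6s further down) until the API comes up. A hedged Go sketch of that dial-and-backoff pattern (not the actual client-go retry code):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Endpoint taken from the log; reflectors keep retrying it until
	// the kube-apiserver static pod is running.
	const apiserver = "143.198.229.35:6443"
	backoff := 200 * time.Millisecond // lease retry starts at 200ms above
	for i := 0; i < 5; i++ {
		conn, err := net.DialTimeout("tcp", apiserver, time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("apiserver reachable")
			return
		}
		fmt.Printf("dial failed (%v), retrying in %v\n", err, backoff)
		time.Sleep(backoff)
		backoff *= 2 // 200ms -> 400ms -> 800ms, matching the interval= values in the log
	}
}
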
Aug 13 00:54:54.535292 kubelet[1580]: E0813 00:54:54.535241 1580 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-f-585a890caa\" not found" node="ci-3510.3.8-f-585a890caa" Aug 13 00:54:54.537870 systemd[1]: Created slice kubepods-burstable-pod7f8ee2469dd421df869f8166ed02c51d.slice. Aug 13 00:54:54.540515 kubelet[1580]: E0813 00:54:54.540475 1580 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-f-585a890caa\" not found" node="ci-3510.3.8-f-585a890caa" Aug 13 00:54:54.542668 kubelet[1580]: I0813 00:54:54.542628 1580 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-f-585a890caa" Aug 13 00:54:54.543518 kubelet[1580]: E0813 00:54:54.543467 1580 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://143.198.229.35:6443/api/v1/nodes\": dial tcp 143.198.229.35:6443: connect: connection refused" node="ci-3510.3.8-f-585a890caa" Aug 13 00:54:54.550809 systemd[1]: Created slice kubepods-burstable-podf96c4ff3da42229d8782a35db4a89aa9.slice. Aug 13 00:54:54.554158 kubelet[1580]: E0813 00:54:54.554107 1580 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-f-585a890caa\" not found" node="ci-3510.3.8-f-585a890caa" Aug 13 00:54:54.559537 kubelet[1580]: I0813 00:54:54.559468 1580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0fd5a8f9f70b8712996cfbd19368b728-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-f-585a890caa\" (UID: \"0fd5a8f9f70b8712996cfbd19368b728\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-f-585a890caa" Aug 13 00:54:54.559901 kubelet[1580]: I0813 00:54:54.559861 1580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0fd5a8f9f70b8712996cfbd19368b728-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-f-585a890caa\" (UID: \"0fd5a8f9f70b8712996cfbd19368b728\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-f-585a890caa" Aug 13 00:54:54.560122 kubelet[1580]: I0813 00:54:54.560087 1580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0fd5a8f9f70b8712996cfbd19368b728-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-f-585a890caa\" (UID: \"0fd5a8f9f70b8712996cfbd19368b728\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-f-585a890caa" Aug 13 00:54:54.560284 kubelet[1580]: I0813 00:54:54.560259 1580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0fd5a8f9f70b8712996cfbd19368b728-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-f-585a890caa\" (UID: \"0fd5a8f9f70b8712996cfbd19368b728\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-f-585a890caa" Aug 13 00:54:54.560406 kubelet[1580]: I0813 00:54:54.560383 1580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0fd5a8f9f70b8712996cfbd19368b728-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-f-585a890caa\" (UID: \"0fd5a8f9f70b8712996cfbd19368b728\") " 
pod="kube-system/kube-controller-manager-ci-3510.3.8-f-585a890caa" Aug 13 00:54:54.560551 kubelet[1580]: I0813 00:54:54.560527 1580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f96c4ff3da42229d8782a35db4a89aa9-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-f-585a890caa\" (UID: \"f96c4ff3da42229d8782a35db4a89aa9\") " pod="kube-system/kube-apiserver-ci-3510.3.8-f-585a890caa" Aug 13 00:54:54.560680 kubelet[1580]: I0813 00:54:54.560658 1580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f96c4ff3da42229d8782a35db4a89aa9-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-f-585a890caa\" (UID: \"f96c4ff3da42229d8782a35db4a89aa9\") " pod="kube-system/kube-apiserver-ci-3510.3.8-f-585a890caa" Aug 13 00:54:54.560838 kubelet[1580]: I0813 00:54:54.560811 1580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f96c4ff3da42229d8782a35db4a89aa9-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-f-585a890caa\" (UID: \"f96c4ff3da42229d8782a35db4a89aa9\") " pod="kube-system/kube-apiserver-ci-3510.3.8-f-585a890caa" Aug 13 00:54:54.561006 kubelet[1580]: E0813 00:54:54.560441 1580 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.229.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-f-585a890caa?timeout=10s\": dial tcp 143.198.229.35:6443: connect: connection refused" interval="400ms" Aug 13 00:54:54.561006 kubelet[1580]: I0813 00:54:54.560980 1580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7f8ee2469dd421df869f8166ed02c51d-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-f-585a890caa\" (UID: \"7f8ee2469dd421df869f8166ed02c51d\") " pod="kube-system/kube-scheduler-ci-3510.3.8-f-585a890caa" Aug 13 00:54:54.747309 kubelet[1580]: I0813 00:54:54.745811 1580 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-f-585a890caa" Aug 13 00:54:54.747920 kubelet[1580]: E0813 00:54:54.747881 1580 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://143.198.229.35:6443/api/v1/nodes\": dial tcp 143.198.229.35:6443: connect: connection refused" node="ci-3510.3.8-f-585a890caa" Aug 13 00:54:54.836823 kubelet[1580]: E0813 00:54:54.836776 1580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:54:54.838875 env[1190]: time="2025-08-13T00:54:54.838339602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-f-585a890caa,Uid:0fd5a8f9f70b8712996cfbd19368b728,Namespace:kube-system,Attempt:0,}" Aug 13 00:54:54.843699 kubelet[1580]: E0813 00:54:54.843359 1580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:54:54.844097 env[1190]: time="2025-08-13T00:54:54.844025845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-f-585a890caa,Uid:7f8ee2469dd421df869f8166ed02c51d,Namespace:kube-system,Attempt:0,}" Aug 13 00:54:54.855537 kubelet[1580]: 
E0813 00:54:54.855424 1580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:54:54.856436 env[1190]: time="2025-08-13T00:54:54.856347759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-f-585a890caa,Uid:f96c4ff3da42229d8782a35db4a89aa9,Namespace:kube-system,Attempt:0,}" Aug 13 00:54:54.962380 kubelet[1580]: E0813 00:54:54.962315 1580 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.229.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-f-585a890caa?timeout=10s\": dial tcp 143.198.229.35:6443: connect: connection refused" interval="800ms" Aug 13 00:54:55.151335 kubelet[1580]: I0813 00:54:55.150301 1580 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-f-585a890caa" Aug 13 00:54:55.151335 kubelet[1580]: E0813 00:54:55.150750 1580 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://143.198.229.35:6443/api/v1/nodes\": dial tcp 143.198.229.35:6443: connect: connection refused" node="ci-3510.3.8-f-585a890caa" Aug 13 00:54:55.360664 kubelet[1580]: W0813 00:54:55.360542 1580 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://143.198.229.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 143.198.229.35:6443: connect: connection refused Aug 13 00:54:55.360664 kubelet[1580]: E0813 00:54:55.360621 1580 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://143.198.229.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 143.198.229.35:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:54:55.453658 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1094743994.mount: Deactivated successfully. 
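Editor's note: the recurring dns.go:153 "Nameserver limits exceeded" warning fires because the node's resolv.conf lists more nameservers than the glibc resolver limit of three; the applied line in the log also shows a duplicated resolver (67.207.67.3 appears twice). An illustrative Go sketch of the truncation, with a hypothetical fourth entry added here to trigger it (a re-implementation for illustration, not kubelet source):

package main

import "fmt"

// maxNameservers mirrors the glibc MAXNS limit the kubelet warning refers to.
const maxNameservers = 3

func capNameservers(ns []string) []string {
	if len(ns) <= maxNameservers {
		return ns
	}
	return ns[:maxNameservers] // entries past the limit are omitted, as the log says
}

func main() {
	// The first three values are the applied line from the log; the fourth
	// is a hypothetical extra resolver added to show the truncation.
	resolvers := []string{"67.207.67.3", "67.207.67.2", "67.207.67.3", "10.0.0.53"}
	fmt.Println(capNameservers(resolvers))
}
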
Aug 13 00:54:55.460630 env[1190]: time="2025-08-13T00:54:55.460570715Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:55.466172 env[1190]: time="2025-08-13T00:54:55.466105684Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:55.467942 env[1190]: time="2025-08-13T00:54:55.467880186Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:55.469428 env[1190]: time="2025-08-13T00:54:55.469377061Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:55.473624 env[1190]: time="2025-08-13T00:54:55.473558446Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:55.482485 env[1190]: time="2025-08-13T00:54:55.482389466Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:55.483855 env[1190]: time="2025-08-13T00:54:55.483805514Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:55.487637 env[1190]: time="2025-08-13T00:54:55.487560766Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:55.488727 env[1190]: time="2025-08-13T00:54:55.488680780Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:55.489378 env[1190]: time="2025-08-13T00:54:55.489340845Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:55.489937 env[1190]: time="2025-08-13T00:54:55.489907174Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:55.490496 env[1190]: time="2025-08-13T00:54:55.490464090Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:54:55.492624 kubelet[1580]: W0813 00:54:55.492466 1580 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://143.198.229.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 143.198.229.35:6443: connect: connection refused Aug 13 00:54:55.492624 kubelet[1580]: E0813 00:54:55.492567 1580 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://143.198.229.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 143.198.229.35:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:54:55.549224 env[1190]: time="2025-08-13T00:54:55.549116345Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:54:55.549224 env[1190]: time="2025-08-13T00:54:55.549220998Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:54:55.549606 env[1190]: time="2025-08-13T00:54:55.549245027Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:54:55.549606 env[1190]: time="2025-08-13T00:54:55.549377957Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6106a6a54cf7c383a86313ffa89c4e0cb0289af4f0f73c234e7ee7b9aa967a6c pid=1630 runtime=io.containerd.runc.v2 Aug 13 00:54:55.555901 env[1190]: time="2025-08-13T00:54:55.555751894Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:54:55.556147 env[1190]: time="2025-08-13T00:54:55.556099870Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:54:55.556613 env[1190]: time="2025-08-13T00:54:55.556551868Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:54:55.563150 env[1190]: time="2025-08-13T00:54:55.563045031Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4f2b7cbe1d8bc20b3979954f81bfb24d66a85086f3bde7c5b5bc5d1b78453d96 pid=1622 runtime=io.containerd.runc.v2 Aug 13 00:54:55.568427 env[1190]: time="2025-08-13T00:54:55.568299552Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:54:55.568427 env[1190]: time="2025-08-13T00:54:55.568362227Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:54:55.568427 env[1190]: time="2025-08-13T00:54:55.568386843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:54:55.569924 env[1190]: time="2025-08-13T00:54:55.569841164Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/995fb8efedf880c5216987d7c57138e8d9579869297b3d3e825585555607591a pid=1655 runtime=io.containerd.runc.v2 Aug 13 00:54:55.591086 systemd[1]: Started cri-containerd-6106a6a54cf7c383a86313ffa89c4e0cb0289af4f0f73c234e7ee7b9aa967a6c.scope. Aug 13 00:54:55.604987 systemd[1]: Started cri-containerd-4f2b7cbe1d8bc20b3979954f81bfb24d66a85086f3bde7c5b5bc5d1b78453d96.scope. 
Aug 13 00:54:55.631207 kubelet[1580]: W0813 00:54:55.631121 1580 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://143.198.229.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-f-585a890caa&limit=500&resourceVersion=0": dial tcp 143.198.229.35:6443: connect: connection refused Aug 13 00:54:55.631435 kubelet[1580]: E0813 00:54:55.631224 1580 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://143.198.229.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-f-585a890caa&limit=500&resourceVersion=0\": dial tcp 143.198.229.35:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:54:55.655014 systemd[1]: Started cri-containerd-995fb8efedf880c5216987d7c57138e8d9579869297b3d3e825585555607591a.scope. Aug 13 00:54:55.716246 env[1190]: time="2025-08-13T00:54:55.716101778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-f-585a890caa,Uid:0fd5a8f9f70b8712996cfbd19368b728,Namespace:kube-system,Attempt:0,} returns sandbox id \"6106a6a54cf7c383a86313ffa89c4e0cb0289af4f0f73c234e7ee7b9aa967a6c\"" Aug 13 00:54:55.719601 kubelet[1580]: E0813 00:54:55.719544 1580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:54:55.726055 env[1190]: time="2025-08-13T00:54:55.724462566Z" level=info msg="CreateContainer within sandbox \"6106a6a54cf7c383a86313ffa89c4e0cb0289af4f0f73c234e7ee7b9aa967a6c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 00:54:55.750818 env[1190]: time="2025-08-13T00:54:55.750744271Z" level=info msg="CreateContainer within sandbox \"6106a6a54cf7c383a86313ffa89c4e0cb0289af4f0f73c234e7ee7b9aa967a6c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"093d56701b27f68b273aa351929e78ad0414370a71d3ce64939d68c9c531abcd\"" Aug 13 00:54:55.752129 env[1190]: time="2025-08-13T00:54:55.752075530Z" level=info msg="StartContainer for \"093d56701b27f68b273aa351929e78ad0414370a71d3ce64939d68c9c531abcd\"" Aug 13 00:54:55.766104 env[1190]: time="2025-08-13T00:54:55.762883919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-f-585a890caa,Uid:f96c4ff3da42229d8782a35db4a89aa9,Namespace:kube-system,Attempt:0,} returns sandbox id \"4f2b7cbe1d8bc20b3979954f81bfb24d66a85086f3bde7c5b5bc5d1b78453d96\"" Aug 13 00:54:55.767112 kubelet[1580]: E0813 00:54:55.766775 1580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:54:55.767112 kubelet[1580]: E0813 00:54:55.766886 1580 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.198.229.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-f-585a890caa?timeout=10s\": dial tcp 143.198.229.35:6443: connect: connection refused" interval="1.6s" Aug 13 00:54:55.770834 env[1190]: time="2025-08-13T00:54:55.770782603Z" level=info msg="CreateContainer within sandbox \"4f2b7cbe1d8bc20b3979954f81bfb24d66a85086f3bde7c5b5bc5d1b78453d96\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 00:54:55.778492 kubelet[1580]: W0813 00:54:55.778370 1580 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://143.198.229.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 143.198.229.35:6443: connect: connection refused Aug 13 00:54:55.778492 kubelet[1580]: E0813 00:54:55.778430 1580 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://143.198.229.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 143.198.229.35:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:54:55.792457 env[1190]: time="2025-08-13T00:54:55.792396478Z" level=info msg="CreateContainer within sandbox \"4f2b7cbe1d8bc20b3979954f81bfb24d66a85086f3bde7c5b5bc5d1b78453d96\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a1e28948557d51b18bc5e25eb501198349d79840ce4272f806924d47a06abc35\"" Aug 13 00:54:55.793756 env[1190]: time="2025-08-13T00:54:55.793702896Z" level=info msg="StartContainer for \"a1e28948557d51b18bc5e25eb501198349d79840ce4272f806924d47a06abc35\"" Aug 13 00:54:55.804877 systemd[1]: Started cri-containerd-093d56701b27f68b273aa351929e78ad0414370a71d3ce64939d68c9c531abcd.scope. Aug 13 00:54:55.810210 env[1190]: time="2025-08-13T00:54:55.810050561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-f-585a890caa,Uid:7f8ee2469dd421df869f8166ed02c51d,Namespace:kube-system,Attempt:0,} returns sandbox id \"995fb8efedf880c5216987d7c57138e8d9579869297b3d3e825585555607591a\"" Aug 13 00:54:55.816022 kubelet[1580]: E0813 00:54:55.813970 1580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:54:55.819674 env[1190]: time="2025-08-13T00:54:55.819611610Z" level=info msg="CreateContainer within sandbox \"995fb8efedf880c5216987d7c57138e8d9579869297b3d3e825585555607591a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 00:54:55.839328 env[1190]: time="2025-08-13T00:54:55.839249258Z" level=info msg="CreateContainer within sandbox \"995fb8efedf880c5216987d7c57138e8d9579869297b3d3e825585555607591a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4ce2061fb8e4f78d882f54458fb6504d890a4f09e99311383643240a0fd29486\"" Aug 13 00:54:55.840605 env[1190]: time="2025-08-13T00:54:55.840563985Z" level=info msg="StartContainer for \"4ce2061fb8e4f78d882f54458fb6504d890a4f09e99311383643240a0fd29486\"" Aug 13 00:54:55.861314 systemd[1]: Started cri-containerd-a1e28948557d51b18bc5e25eb501198349d79840ce4272f806924d47a06abc35.scope. 
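Editor's note: the sequence above is the CRI call order for each static pod: RunPodSandbox returns a sandbox id, CreateContainer places the container inside that sandbox, StartContainer launches it, and, because the NodeConfig dump earlier shows "CgroupDriver":"systemd", each one surfaces as a cri-containerd-<id>.scope unit. A compressed Go sketch of that sequence against the CRI gRPC API (hedged: minimal configs, and the scheduler image tag is assumed for illustration):

package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// 1. Sandbox first ("RunPodSandbox ... returns sandbox id" in the log).
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "kube-scheduler-ci-3510.3.8-f-585a890caa", // name from the log
			Namespace: "kube-system",
			Uid:       "7f8ee2469dd421df869f8166ed02c51d",
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		panic(err)
	}

	// 2. Then the container inside it ("CreateContainer within sandbox ...").
	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-scheduler"},
			// Image tag assumed to match kubeletVersion v1.32.4; illustrative only.
			Image: &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-scheduler:v1.32.4"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		panic(err)
	}

	// 3. Finally start it ("StartContainer for ... returns successfully").
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
		panic(err)
	}
	fmt.Println("sandbox", sb.PodSandboxId, "container", ctr.ContainerId)
}
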
Aug 13 00:54:55.879450 kubelet[1580]: E0813 00:54:55.879303 1580 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://143.198.229.35:6443/api/v1/namespaces/default/events\": dial tcp 143.198.229.35:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.8-f-585a890caa.185b2d7cea1200e5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.8-f-585a890caa,UID:ci-3510.3.8-f-585a890caa,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.8-f-585a890caa,},FirstTimestamp:2025-08-13 00:54:54.317183205 +0000 UTC m=+0.749548952,LastTimestamp:2025-08-13 00:54:54.317183205 +0000 UTC m=+0.749548952,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.8-f-585a890caa,}" Aug 13 00:54:55.889951 env[1190]: time="2025-08-13T00:54:55.889848454Z" level=info msg="StartContainer for \"093d56701b27f68b273aa351929e78ad0414370a71d3ce64939d68c9c531abcd\" returns successfully" Aug 13 00:54:55.936816 systemd[1]: Started cri-containerd-4ce2061fb8e4f78d882f54458fb6504d890a4f09e99311383643240a0fd29486.scope. Aug 13 00:54:55.952908 kubelet[1580]: I0813 00:54:55.952380 1580 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-f-585a890caa" Aug 13 00:54:55.952908 kubelet[1580]: E0813 00:54:55.952856 1580 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://143.198.229.35:6443/api/v1/nodes\": dial tcp 143.198.229.35:6443: connect: connection refused" node="ci-3510.3.8-f-585a890caa" Aug 13 00:54:55.989215 env[1190]: time="2025-08-13T00:54:55.989058576Z" level=info msg="StartContainer for \"a1e28948557d51b18bc5e25eb501198349d79840ce4272f806924d47a06abc35\" returns successfully" Aug 13 00:54:56.052444 env[1190]: time="2025-08-13T00:54:56.052386421Z" level=info msg="StartContainer for \"4ce2061fb8e4f78d882f54458fb6504d890a4f09e99311383643240a0fd29486\" returns successfully" Aug 13 00:54:56.400257 kubelet[1580]: E0813 00:54:56.400094 1580 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://143.198.229.35:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 143.198.229.35:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:54:56.434122 kubelet[1580]: E0813 00:54:56.434082 1580 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-f-585a890caa\" not found" node="ci-3510.3.8-f-585a890caa" Aug 13 00:54:56.434465 kubelet[1580]: E0813 00:54:56.434448 1580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:54:56.445050 kubelet[1580]: E0813 00:54:56.444918 1580 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-f-585a890caa\" not found" node="ci-3510.3.8-f-585a890caa" Aug 13 00:54:56.449357 kubelet[1580]: E0813 00:54:56.449307 1580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 
67.207.67.3" Aug 13 00:54:56.454211 kubelet[1580]: E0813 00:54:56.454175 1580 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-f-585a890caa\" not found" node="ci-3510.3.8-f-585a890caa" Aug 13 00:54:56.454625 kubelet[1580]: E0813 00:54:56.454606 1580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:54:57.457133 kubelet[1580]: E0813 00:54:57.457081 1580 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-f-585a890caa\" not found" node="ci-3510.3.8-f-585a890caa" Aug 13 00:54:57.457632 kubelet[1580]: E0813 00:54:57.457310 1580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:54:57.457857 kubelet[1580]: E0813 00:54:57.457826 1580 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-f-585a890caa\" not found" node="ci-3510.3.8-f-585a890caa" Aug 13 00:54:57.458157 kubelet[1580]: E0813 00:54:57.458135 1580 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:54:57.555014 kubelet[1580]: I0813 00:54:57.554973 1580 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-f-585a890caa" Aug 13 00:54:58.831233 kubelet[1580]: E0813 00:54:58.831178 1580 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.8-f-585a890caa\" not found" node="ci-3510.3.8-f-585a890caa" Aug 13 00:54:58.882679 kubelet[1580]: I0813 00:54:58.882623 1580 kubelet_node_status.go:78] "Successfully registered node" node="ci-3510.3.8-f-585a890caa" Aug 13 00:54:58.956894 kubelet[1580]: I0813 00:54:58.956838 1580 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-f-585a890caa" Aug 13 00:54:58.972587 kubelet[1580]: E0813 00:54:58.972545 1580 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-3510.3.8-f-585a890caa\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-3510.3.8-f-585a890caa" Aug 13 00:54:58.972846 kubelet[1580]: I0813 00:54:58.972827 1580 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-f-585a890caa" Aug 13 00:54:58.975633 kubelet[1580]: E0813 00:54:58.975577 1580 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.8-f-585a890caa\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-3510.3.8-f-585a890caa" Aug 13 00:54:58.975874 kubelet[1580]: I0813 00:54:58.975856 1580 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-f-585a890caa" Aug 13 00:54:58.979918 kubelet[1580]: E0813 00:54:58.979849 1580 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.8-f-585a890caa\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510.3.8-f-585a890caa" Aug 13 00:54:59.306795 kubelet[1580]: I0813 00:54:59.306707 
1580 apiserver.go:52] "Watching apiserver" Aug 13 00:54:59.357729 kubelet[1580]: I0813 00:54:59.357667 1580 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 00:55:01.041053 systemd[1]: Reloading. Aug 13 00:55:01.214280 /usr/lib/systemd/system-generators/torcx-generator[1873]: time="2025-08-13T00:55:01Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 00:55:01.214326 /usr/lib/systemd/system-generators/torcx-generator[1873]: time="2025-08-13T00:55:01Z" level=info msg="torcx already run" Aug 13 00:55:01.288910 update_engine[1183]: I0813 00:55:01.288203 1183 update_attempter.cc:509] Updating boot flags... Aug 13 00:55:01.493062 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 00:55:01.493089 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 00:55:01.539030 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:55:01.869066 systemd[1]: Stopping kubelet.service... Aug 13 00:55:01.887216 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 00:55:01.887875 systemd[1]: Stopped kubelet.service. Aug 13 00:55:01.888197 systemd[1]: kubelet.service: Consumed 1.297s CPU time. Aug 13 00:55:01.894449 systemd[1]: Starting kubelet.service... Aug 13 00:55:03.539615 systemd[1]: Started kubelet.service. Aug 13 00:55:03.658762 kubelet[1938]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:55:03.659429 kubelet[1938]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 13 00:55:03.659504 kubelet[1938]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:55:03.661403 kubelet[1938]: I0813 00:55:03.661327 1938 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:55:03.682762 kubelet[1938]: I0813 00:55:03.682702 1938 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Aug 13 00:55:03.683117 kubelet[1938]: I0813 00:55:03.683090 1938 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:55:03.684184 kubelet[1938]: I0813 00:55:03.684145 1938 server.go:954] "Client rotation is on, will bootstrap in background" Aug 13 00:55:03.688193 kubelet[1938]: I0813 00:55:03.688155 1938 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
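Editor's note: the three "no PriorityClass with name system-node-critical was found" failures above are also transient bootstrap noise: system-node-critical is one of the built-in PriorityClasses the kube-apiserver seeds shortly after it starts, so mirror-pod creation for the static control-plane pods succeeds on a later sync, and the second kubelet start at 00:55:03 proceeds with rotated client certs. A client-go sketch that checks for the class, assuming a kubeconfig at the standard kubeadm path /etc/kubernetes/admin.conf (hypothetical for this host):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path; the kubelet above uses its own bootstrap
	// and rotated credentials instead.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pc, err := cs.SchedulingV1().PriorityClasses().Get(
		context.Background(), "system-node-critical", metav1.GetOptions{})
	if err != nil {
		// Before the apiserver finishes bootstrapping, this is the same
		// "not found" state that made the mirror pods 'forbidden' above.
		fmt.Println("not seeded yet:", err)
		return
	}
	fmt.Println(pc.Name, "value:", pc.Value)
}
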
Aug 13 00:55:03.705610 kubelet[1938]: I0813 00:55:03.705555 1938 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:55:03.725852 sudo[1952]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Aug 13 00:55:03.726715 sudo[1952]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Aug 13 00:55:03.731812 kubelet[1938]: E0813 00:55:03.731764 1938 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 00:55:03.732258 kubelet[1938]: I0813 00:55:03.732231 1938 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 00:55:03.743112 kubelet[1938]: I0813 00:55:03.743058 1938 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 13 00:55:03.744027 kubelet[1938]: I0813 00:55:03.743928 1938 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:55:03.750747 kubelet[1938]: I0813 00:55:03.744228 1938 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-f-585a890caa","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 00:55:03.751419 kubelet[1938]: I0813 00:55:03.751377 1938 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:55:03.751800 kubelet[1938]: I0813 00:55:03.751779 1938 container_manager_linux.go:304] "Creating device plugin manager" Aug 13 00:55:03.752070 kubelet[1938]: I0813 00:55:03.752052 1938 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:55:03.752527 kubelet[1938]: I0813 00:55:03.752505 1938 kubelet.go:446] "Attempting to sync node with API server" Aug 13 00:55:03.752676 kubelet[1938]: I0813 00:55:03.752656 1938 kubelet.go:341] "Adding static pod path" 
path="/etc/kubernetes/manifests" Aug 13 00:55:03.752830 kubelet[1938]: I0813 00:55:03.752811 1938 kubelet.go:352] "Adding apiserver pod source" Aug 13 00:55:03.752942 kubelet[1938]: I0813 00:55:03.752925 1938 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:55:03.765608 kubelet[1938]: I0813 00:55:03.765566 1938 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Aug 13 00:55:03.766540 kubelet[1938]: I0813 00:55:03.766507 1938 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 00:55:03.768623 kubelet[1938]: I0813 00:55:03.768587 1938 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 00:55:03.770383 kubelet[1938]: I0813 00:55:03.770356 1938 server.go:1287] "Started kubelet" Aug 13 00:55:03.773240 kubelet[1938]: I0813 00:55:03.773184 1938 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:55:03.774417 kubelet[1938]: I0813 00:55:03.774392 1938 server.go:479] "Adding debug handlers to kubelet server" Aug 13 00:55:03.778018 kubelet[1938]: I0813 00:55:03.777921 1938 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:55:03.781507 kubelet[1938]: I0813 00:55:03.781471 1938 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:55:03.783952 kubelet[1938]: I0813 00:55:03.783917 1938 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:55:03.791204 kubelet[1938]: E0813 00:55:03.791065 1938 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:55:03.792889 kubelet[1938]: I0813 00:55:03.792852 1938 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:55:03.801195 kubelet[1938]: I0813 00:55:03.801158 1938 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 00:55:03.801878 kubelet[1938]: E0813 00:55:03.801838 1938 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-f-585a890caa\" not found" Aug 13 00:55:03.802321 kubelet[1938]: I0813 00:55:03.802302 1938 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 00:55:03.802588 kubelet[1938]: I0813 00:55:03.802571 1938 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:55:03.808899 kubelet[1938]: I0813 00:55:03.808854 1938 factory.go:221] Registration of the systemd container factory successfully Aug 13 00:55:03.809379 kubelet[1938]: I0813 00:55:03.809334 1938 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:55:03.811780 kubelet[1938]: I0813 00:55:03.811750 1938 factory.go:221] Registration of the containerd container factory successfully Aug 13 00:55:03.834163 kubelet[1938]: I0813 00:55:03.834098 1938 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 00:55:03.836000 kubelet[1938]: I0813 00:55:03.835920 1938 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 00:55:03.836399 kubelet[1938]: I0813 00:55:03.836378 1938 status_manager.go:227] "Starting to sync pod status with apiserver" Aug 13 00:55:03.836625 kubelet[1938]: I0813 00:55:03.836574 1938 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Aug 13 00:55:03.836793 kubelet[1938]: I0813 00:55:03.836754 1938 kubelet.go:2382] "Starting kubelet main sync loop" Aug 13 00:55:03.837080 kubelet[1938]: E0813 00:55:03.837055 1938 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:55:03.936238 kubelet[1938]: I0813 00:55:03.936180 1938 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 00:55:03.936238 kubelet[1938]: I0813 00:55:03.936206 1938 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 00:55:03.936238 kubelet[1938]: I0813 00:55:03.936235 1938 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:55:03.936711 kubelet[1938]: I0813 00:55:03.936669 1938 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 00:55:03.936771 kubelet[1938]: I0813 00:55:03.936732 1938 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 00:55:03.936771 kubelet[1938]: I0813 00:55:03.936767 1938 policy_none.go:49] "None policy: Start" Aug 13 00:55:03.936884 kubelet[1938]: I0813 00:55:03.936817 1938 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 00:55:03.936884 kubelet[1938]: I0813 00:55:03.936841 1938 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:55:03.938200 kubelet[1938]: I0813 00:55:03.938166 1938 state_mem.go:75] "Updated machine memory state" Aug 13 00:55:03.939303 kubelet[1938]: E0813 00:55:03.939273 1938 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 13 00:55:03.949425 kubelet[1938]: I0813 00:55:03.949377 1938 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 00:55:03.952127 kubelet[1938]: I0813 00:55:03.952070 1938 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:55:03.960366 kubelet[1938]: E0813 00:55:03.960336 1938 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Aug 13 00:55:03.961665 kubelet[1938]: I0813 00:55:03.958794 1938 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:55:03.962147 kubelet[1938]: I0813 00:55:03.962120 1938 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:55:04.076838 kubelet[1938]: I0813 00:55:04.076700 1938 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-f-585a890caa" Aug 13 00:55:04.110612 kubelet[1938]: I0813 00:55:04.110557 1938 kubelet_node_status.go:124] "Node was previously registered" node="ci-3510.3.8-f-585a890caa" Aug 13 00:55:04.111090 kubelet[1938]: I0813 00:55:04.111047 1938 kubelet_node_status.go:78] "Successfully registered node" node="ci-3510.3.8-f-585a890caa" Aug 13 00:55:04.141582 kubelet[1938]: I0813 00:55:04.141535 1938 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-f-585a890caa" Aug 13 00:55:04.142404 kubelet[1938]: I0813 00:55:04.142376 1938 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-f-585a890caa" Aug 13 00:55:04.143309 kubelet[1938]: I0813 00:55:04.143285 1938 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-f-585a890caa" Aug 13 00:55:04.154340 kubelet[1938]: W0813 00:55:04.154239 1938 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 00:55:04.159370 kubelet[1938]: W0813 00:55:04.159333 1938 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 00:55:04.164805 kubelet[1938]: W0813 00:55:04.164766 1938 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 00:55:04.208393 kubelet[1938]: I0813 00:55:04.208317 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7f8ee2469dd421df869f8166ed02c51d-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-f-585a890caa\" (UID: \"7f8ee2469dd421df869f8166ed02c51d\") " pod="kube-system/kube-scheduler-ci-3510.3.8-f-585a890caa" Aug 13 00:55:04.208777 kubelet[1938]: I0813 00:55:04.208720 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f96c4ff3da42229d8782a35db4a89aa9-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-f-585a890caa\" (UID: \"f96c4ff3da42229d8782a35db4a89aa9\") " pod="kube-system/kube-apiserver-ci-3510.3.8-f-585a890caa" Aug 13 00:55:04.209001 kubelet[1938]: I0813 00:55:04.208937 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0fd5a8f9f70b8712996cfbd19368b728-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-f-585a890caa\" (UID: \"0fd5a8f9f70b8712996cfbd19368b728\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-f-585a890caa" Aug 13 00:55:04.209269 kubelet[1938]: I0813 00:55:04.209229 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0fd5a8f9f70b8712996cfbd19368b728-k8s-certs\") pod 
\"kube-controller-manager-ci-3510.3.8-f-585a890caa\" (UID: \"0fd5a8f9f70b8712996cfbd19368b728\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-f-585a890caa" Aug 13 00:55:04.209417 kubelet[1938]: I0813 00:55:04.209394 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0fd5a8f9f70b8712996cfbd19368b728-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-f-585a890caa\" (UID: \"0fd5a8f9f70b8712996cfbd19368b728\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-f-585a890caa" Aug 13 00:55:04.209581 kubelet[1938]: I0813 00:55:04.209556 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f96c4ff3da42229d8782a35db4a89aa9-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-f-585a890caa\" (UID: \"f96c4ff3da42229d8782a35db4a89aa9\") " pod="kube-system/kube-apiserver-ci-3510.3.8-f-585a890caa" Aug 13 00:55:04.209737 kubelet[1938]: I0813 00:55:04.209713 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f96c4ff3da42229d8782a35db4a89aa9-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-f-585a890caa\" (UID: \"f96c4ff3da42229d8782a35db4a89aa9\") " pod="kube-system/kube-apiserver-ci-3510.3.8-f-585a890caa" Aug 13 00:55:04.209888 kubelet[1938]: I0813 00:55:04.209865 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0fd5a8f9f70b8712996cfbd19368b728-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-f-585a890caa\" (UID: \"0fd5a8f9f70b8712996cfbd19368b728\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-f-585a890caa" Aug 13 00:55:04.210082 kubelet[1938]: I0813 00:55:04.210038 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0fd5a8f9f70b8712996cfbd19368b728-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-f-585a890caa\" (UID: \"0fd5a8f9f70b8712996cfbd19368b728\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-f-585a890caa" Aug 13 00:55:04.455938 kubelet[1938]: E0813 00:55:04.455865 1938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:55:04.461736 kubelet[1938]: E0813 00:55:04.461061 1938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:55:04.466868 kubelet[1938]: E0813 00:55:04.466797 1938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:55:04.761700 kubelet[1938]: I0813 00:55:04.761506 1938 apiserver.go:52] "Watching apiserver" Aug 13 00:55:04.806139 kubelet[1938]: I0813 00:55:04.806077 1938 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 00:55:04.839861 sudo[1952]: pam_unix(sudo:session): session closed for user root Aug 13 00:55:04.877811 kubelet[1938]: I0813 00:55:04.877773 1938 kubelet.go:3194] "Creating a mirror pod for static 
pod" pod="kube-system/kube-apiserver-ci-3510.3.8-f-585a890caa" Aug 13 00:55:04.878520 kubelet[1938]: E0813 00:55:04.877800 1938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:55:04.878771 kubelet[1938]: I0813 00:55:04.878567 1938 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-f-585a890caa" Aug 13 00:55:04.898367 kubelet[1938]: W0813 00:55:04.898330 1938 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 00:55:04.898685 kubelet[1938]: E0813 00:55:04.898661 1938 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.8-f-585a890caa\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.8-f-585a890caa" Aug 13 00:55:04.899109 kubelet[1938]: E0813 00:55:04.899077 1938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:55:04.901606 kubelet[1938]: W0813 00:55:04.901552 1938 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 00:55:04.901981 kubelet[1938]: E0813 00:55:04.901924 1938 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.8-f-585a890caa\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.8-f-585a890caa" Aug 13 00:55:04.902327 kubelet[1938]: E0813 00:55:04.902305 1938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:55:04.973733 kubelet[1938]: I0813 00:55:04.973621 1938 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.8-f-585a890caa" podStartSLOduration=0.973583569 podStartE2EDuration="973.583569ms" podCreationTimestamp="2025-08-13 00:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:55:04.954529768 +0000 UTC m=+1.401780262" watchObservedRunningTime="2025-08-13 00:55:04.973583569 +0000 UTC m=+1.420834074" Aug 13 00:55:05.005021 kubelet[1938]: I0813 00:55:05.004908 1938 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.8-f-585a890caa" podStartSLOduration=1.004803088 podStartE2EDuration="1.004803088s" podCreationTimestamp="2025-08-13 00:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:55:04.974864903 +0000 UTC m=+1.422115408" watchObservedRunningTime="2025-08-13 00:55:05.004803088 +0000 UTC m=+1.452053596" Aug 13 00:55:05.036159 kubelet[1938]: I0813 00:55:05.035936 1938 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.8-f-585a890caa" podStartSLOduration=1.035904404 podStartE2EDuration="1.035904404s" podCreationTimestamp="2025-08-13 00:55:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:55:05.007284348 +0000 UTC 
m=+1.454534850" watchObservedRunningTime="2025-08-13 00:55:05.035904404 +0000 UTC m=+1.483154914" Aug 13 00:55:05.688527 kubelet[1938]: I0813 00:55:05.688465 1938 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 00:55:05.689482 env[1190]: time="2025-08-13T00:55:05.689424114Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 13 00:55:05.690278 kubelet[1938]: I0813 00:55:05.690250 1938 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 00:55:05.879744 kubelet[1938]: E0813 00:55:05.879700 1938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:55:05.880725 kubelet[1938]: E0813 00:55:05.880691 1938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:55:06.337689 systemd[1]: Created slice kubepods-besteffort-pod28f837b6_a082_48ec_a0dc_3f3517c014bd.slice. Aug 13 00:55:06.358996 systemd[1]: Created slice kubepods-burstable-pod1688b4f8_63ad_4d8e_82d3_a6c2a3c8e036.slice. Aug 13 00:55:06.363578 kubelet[1938]: W0813 00:55:06.363542 1938 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510.3.8-f-585a890caa" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.8-f-585a890caa' and this object Aug 13 00:55:06.366541 kubelet[1938]: E0813 00:55:06.363794 1938 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ci-3510.3.8-f-585a890caa\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.8-f-585a890caa' and this object" logger="UnhandledError" Aug 13 00:55:06.366541 kubelet[1938]: W0813 00:55:06.363884 1938 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510.3.8-f-585a890caa" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.8-f-585a890caa' and this object Aug 13 00:55:06.366541 kubelet[1938]: E0813 00:55:06.363901 1938 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-3510.3.8-f-585a890caa\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.8-f-585a890caa' and this object" logger="UnhandledError" Aug 13 00:55:06.366541 kubelet[1938]: W0813 00:55:06.363998 1938 reflector.go:569] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510.3.8-f-585a890caa" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.8-f-585a890caa' and this object Aug 13 00:55:06.366769 kubelet[1938]: E0813 
00:55:06.364013 1938 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ci-3510.3.8-f-585a890caa\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.8-f-585a890caa' and this object" logger="UnhandledError" Aug 13 00:55:06.428211 kubelet[1938]: I0813 00:55:06.428080 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/28f837b6-a082-48ec-a0dc-3f3517c014bd-lib-modules\") pod \"kube-proxy-dbnl5\" (UID: \"28f837b6-a082-48ec-a0dc-3f3517c014bd\") " pod="kube-system/kube-proxy-dbnl5" Aug 13 00:55:06.428566 kubelet[1938]: I0813 00:55:06.428247 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036-cni-path\") pod \"cilium-bf2qv\" (UID: \"1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036\") " pod="kube-system/cilium-bf2qv" Aug 13 00:55:06.428566 kubelet[1938]: I0813 00:55:06.428318 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036-xtables-lock\") pod \"cilium-bf2qv\" (UID: \"1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036\") " pod="kube-system/cilium-bf2qv" Aug 13 00:55:06.428566 kubelet[1938]: I0813 00:55:06.428351 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/28f837b6-a082-48ec-a0dc-3f3517c014bd-xtables-lock\") pod \"kube-proxy-dbnl5\" (UID: \"28f837b6-a082-48ec-a0dc-3f3517c014bd\") " pod="kube-system/kube-proxy-dbnl5" Aug 13 00:55:06.428566 kubelet[1938]: I0813 00:55:06.428429 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tqpj\" (UniqueName: \"kubernetes.io/projected/28f837b6-a082-48ec-a0dc-3f3517c014bd-kube-api-access-7tqpj\") pod \"kube-proxy-dbnl5\" (UID: \"28f837b6-a082-48ec-a0dc-3f3517c014bd\") " pod="kube-system/kube-proxy-dbnl5" Aug 13 00:55:06.428566 kubelet[1938]: I0813 00:55:06.428522 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036-cilium-run\") pod \"cilium-bf2qv\" (UID: \"1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036\") " pod="kube-system/cilium-bf2qv" Aug 13 00:55:06.428798 kubelet[1938]: I0813 00:55:06.428574 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036-cilium-config-path\") pod \"cilium-bf2qv\" (UID: \"1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036\") " pod="kube-system/cilium-bf2qv" Aug 13 00:55:06.428798 kubelet[1938]: I0813 00:55:06.428737 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036-host-proc-sys-kernel\") pod \"cilium-bf2qv\" (UID: \"1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036\") " pod="kube-system/cilium-bf2qv" Aug 13 00:55:06.428889 kubelet[1938]: I0813 00:55:06.428820 1938 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/28f837b6-a082-48ec-a0dc-3f3517c014bd-kube-proxy\") pod \"kube-proxy-dbnl5\" (UID: \"28f837b6-a082-48ec-a0dc-3f3517c014bd\") " pod="kube-system/kube-proxy-dbnl5" Aug 13 00:55:06.428889 kubelet[1938]: I0813 00:55:06.428880 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036-bpf-maps\") pod \"cilium-bf2qv\" (UID: \"1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036\") " pod="kube-system/cilium-bf2qv" Aug 13 00:55:06.429032 kubelet[1938]: I0813 00:55:06.428911 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036-clustermesh-secrets\") pod \"cilium-bf2qv\" (UID: \"1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036\") " pod="kube-system/cilium-bf2qv" Aug 13 00:55:06.429088 kubelet[1938]: I0813 00:55:06.429058 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036-hubble-tls\") pod \"cilium-bf2qv\" (UID: \"1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036\") " pod="kube-system/cilium-bf2qv" Aug 13 00:55:06.429158 kubelet[1938]: I0813 00:55:06.429128 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036-host-proc-sys-net\") pod \"cilium-bf2qv\" (UID: \"1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036\") " pod="kube-system/cilium-bf2qv" Aug 13 00:55:06.429235 kubelet[1938]: I0813 00:55:06.429183 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv7cn\" (UniqueName: \"kubernetes.io/projected/1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036-kube-api-access-mv7cn\") pod \"cilium-bf2qv\" (UID: \"1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036\") " pod="kube-system/cilium-bf2qv" Aug 13 00:55:06.429235 kubelet[1938]: I0813 00:55:06.429211 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036-hostproc\") pod \"cilium-bf2qv\" (UID: \"1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036\") " pod="kube-system/cilium-bf2qv" Aug 13 00:55:06.429333 kubelet[1938]: I0813 00:55:06.429240 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036-cilium-cgroup\") pod \"cilium-bf2qv\" (UID: \"1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036\") " pod="kube-system/cilium-bf2qv" Aug 13 00:55:06.429333 kubelet[1938]: I0813 00:55:06.429287 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036-etc-cni-netd\") pod \"cilium-bf2qv\" (UID: \"1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036\") " pod="kube-system/cilium-bf2qv" Aug 13 00:55:06.429333 kubelet[1938]: I0813 00:55:06.429311 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036-lib-modules\") pod \"cilium-bf2qv\" (UID: \"1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036\") " pod="kube-system/cilium-bf2qv" Aug 13 00:55:06.541509 kubelet[1938]: I0813 00:55:06.541461 1938 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Aug 13 00:55:06.653197 kubelet[1938]: E0813 00:55:06.653043 1938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:55:06.654880 env[1190]: time="2025-08-13T00:55:06.654750768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dbnl5,Uid:28f837b6-a082-48ec-a0dc-3f3517c014bd,Namespace:kube-system,Attempt:0,}" Aug 13 00:55:06.707609 env[1190]: time="2025-08-13T00:55:06.706709318Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:55:06.707609 env[1190]: time="2025-08-13T00:55:06.706799083Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:55:06.707609 env[1190]: time="2025-08-13T00:55:06.706816867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:55:06.707609 env[1190]: time="2025-08-13T00:55:06.707245753Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ca74e5abe94f8ccf89d184ff85f21ed5dd333e6618d9ec145c24c620d0ce508e pid=1995 runtime=io.containerd.runc.v2 Aug 13 00:55:06.760497 systemd[1]: Started cri-containerd-ca74e5abe94f8ccf89d184ff85f21ed5dd333e6618d9ec145c24c620d0ce508e.scope. Aug 13 00:55:06.882330 kubelet[1938]: E0813 00:55:06.882273 1938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:55:06.922012 systemd[1]: Created slice kubepods-besteffort-pod8a43b934_62a6_4148_be81_43e0131241f4.slice. 
Aug 13 00:55:06.936998 kubelet[1938]: I0813 00:55:06.935751 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8a43b934-62a6-4148-be81-43e0131241f4-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-rch2n\" (UID: \"8a43b934-62a6-4148-be81-43e0131241f4\") " pod="kube-system/cilium-operator-6c4d7847fc-rch2n" Aug 13 00:55:06.936998 kubelet[1938]: I0813 00:55:06.935870 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqv9x\" (UniqueName: \"kubernetes.io/projected/8a43b934-62a6-4148-be81-43e0131241f4-kube-api-access-nqv9x\") pod \"cilium-operator-6c4d7847fc-rch2n\" (UID: \"8a43b934-62a6-4148-be81-43e0131241f4\") " pod="kube-system/cilium-operator-6c4d7847fc-rch2n" Aug 13 00:55:06.943823 kubelet[1938]: I0813 00:55:06.943728 1938 status_manager.go:890] "Failed to get status for pod" podUID="8a43b934-62a6-4148-be81-43e0131241f4" pod="kube-system/cilium-operator-6c4d7847fc-rch2n" err="pods \"cilium-operator-6c4d7847fc-rch2n\" is forbidden: User \"system:node:ci-3510.3.8-f-585a890caa\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.8-f-585a890caa' and this object" Aug 13 00:55:06.989652 env[1190]: time="2025-08-13T00:55:06.989586421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dbnl5,Uid:28f837b6-a082-48ec-a0dc-3f3517c014bd,Namespace:kube-system,Attempt:0,} returns sandbox id \"ca74e5abe94f8ccf89d184ff85f21ed5dd333e6618d9ec145c24c620d0ce508e\"" Aug 13 00:55:06.991333 kubelet[1938]: E0813 00:55:06.991286 1938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:55:06.995322 env[1190]: time="2025-08-13T00:55:06.995267400Z" level=info msg="CreateContainer within sandbox \"ca74e5abe94f8ccf89d184ff85f21ed5dd333e6618d9ec145c24c620d0ce508e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 00:55:07.028266 env[1190]: time="2025-08-13T00:55:07.028172917Z" level=info msg="CreateContainer within sandbox \"ca74e5abe94f8ccf89d184ff85f21ed5dd333e6618d9ec145c24c620d0ce508e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a98f4e66dab92b4ff2aa0c45dca24b5dbc4a91f8319bc1dd36cf5c2ec9f00f31\"" Aug 13 00:55:07.029214 env[1190]: time="2025-08-13T00:55:07.029171828Z" level=info msg="StartContainer for \"a98f4e66dab92b4ff2aa0c45dca24b5dbc4a91f8319bc1dd36cf5c2ec9f00f31\"" Aug 13 00:55:07.074443 systemd[1]: Started cri-containerd-a98f4e66dab92b4ff2aa0c45dca24b5dbc4a91f8319bc1dd36cf5c2ec9f00f31.scope. 
Aug 13 00:55:07.125596 kubelet[1938]: E0813 00:55:07.125536 1938 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[cilium-config-path clustermesh-secrets hubble-tls], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-bf2qv" podUID="1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036" Aug 13 00:55:07.281300 env[1190]: time="2025-08-13T00:55:07.281226302Z" level=info msg="StartContainer for \"a98f4e66dab92b4ff2aa0c45dca24b5dbc4a91f8319bc1dd36cf5c2ec9f00f31\" returns successfully" Aug 13 00:55:07.533188 kubelet[1938]: E0813 00:55:07.532947 1938 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Aug 13 00:55:07.533188 kubelet[1938]: E0813 00:55:07.533189 1938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036-cilium-config-path podName:1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036 nodeName:}" failed. No retries permitted until 2025-08-13 00:55:08.033156054 +0000 UTC m=+4.480406552 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036-cilium-config-path") pod "cilium-bf2qv" (UID: "1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036") : failed to sync configmap cache: timed out waiting for the condition Aug 13 00:55:07.534566 kubelet[1938]: E0813 00:55:07.534518 1938 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Aug 13 00:55:07.534891 kubelet[1938]: E0813 00:55:07.534865 1938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036-clustermesh-secrets podName:1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036 nodeName:}" failed. No retries permitted until 2025-08-13 00:55:08.034824982 +0000 UTC m=+4.482075488 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036-clustermesh-secrets") pod "cilium-bf2qv" (UID: "1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036") : failed to sync secret cache: timed out waiting for the condition Aug 13 00:55:07.559404 systemd[1]: run-containerd-runc-k8s.io-ca74e5abe94f8ccf89d184ff85f21ed5dd333e6618d9ec145c24c620d0ce508e-runc.ivJlij.mount: Deactivated successfully. 
Aug 13 00:55:07.888109 kubelet[1938]: E0813 00:55:07.887679 1938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:55:07.909374 kubelet[1938]: I0813 00:55:07.909300 1938 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dbnl5" podStartSLOduration=1.9092762030000001 podStartE2EDuration="1.909276203s" podCreationTimestamp="2025-08-13 00:55:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:55:07.908436281 +0000 UTC m=+4.355686829" watchObservedRunningTime="2025-08-13 00:55:07.909276203 +0000 UTC m=+4.356526722" Aug 13 00:55:07.941191 kubelet[1938]: I0813 00:55:07.941143 1938 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036-host-proc-sys-net\") pod \"1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036\" (UID: \"1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036\") " Aug 13 00:55:07.941577 kubelet[1938]: I0813 00:55:07.941551 1938 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036-bpf-maps\") pod \"1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036\" (UID: \"1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036\") " Aug 13 00:55:07.941731 kubelet[1938]: I0813 00:55:07.941713 1938 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036-xtables-lock\") pod \"1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036\" (UID: \"1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036\") " Aug 13 00:55:07.941834 kubelet[1938]: I0813 00:55:07.941817 1938 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036-cilium-run\") pod \"1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036\" (UID: \"1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036\") " Aug 13 00:55:07.941946 kubelet[1938]: I0813 00:55:07.941928 1938 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mv7cn\" (UniqueName: \"kubernetes.io/projected/1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036-kube-api-access-mv7cn\") pod \"1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036\" (UID: \"1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036\") " Aug 13 00:55:07.942063 kubelet[1938]: I0813 00:55:07.942048 1938 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036-cilium-cgroup\") pod \"1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036\" (UID: \"1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036\") " Aug 13 00:55:07.942149 kubelet[1938]: I0813 00:55:07.942133 1938 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036-hostproc\") pod \"1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036\" (UID: \"1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036\") " Aug 13 00:55:07.942232 kubelet[1938]: I0813 00:55:07.942214 1938 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036-cni-path\") pod \"1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036\" (UID: 
\"1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036\") " Aug 13 00:55:07.942319 kubelet[1938]: I0813 00:55:07.942306 1938 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036-lib-modules\") pod \"1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036\" (UID: \"1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036\") " Aug 13 00:55:07.942407 kubelet[1938]: I0813 00:55:07.942393 1938 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036-hubble-tls\") pod \"1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036\" (UID: \"1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036\") " Aug 13 00:55:07.942488 kubelet[1938]: I0813 00:55:07.942474 1938 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036-etc-cni-netd\") pod \"1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036\" (UID: \"1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036\") " Aug 13 00:55:07.942612 kubelet[1938]: I0813 00:55:07.942587 1938 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036-host-proc-sys-kernel\") pod \"1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036\" (UID: \"1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036\") " Aug 13 00:55:07.943488 kubelet[1938]: I0813 00:55:07.941484 1938 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036" (UID: "1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:55:07.943720 kubelet[1938]: I0813 00:55:07.941615 1938 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036" (UID: "1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:55:07.943969 kubelet[1938]: I0813 00:55:07.941736 1938 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036" (UID: "1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:55:07.944078 kubelet[1938]: I0813 00:55:07.941845 1938 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036" (UID: "1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:55:07.944189 kubelet[1938]: I0813 00:55:07.944167 1938 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036" (UID: "1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:55:07.944395 kubelet[1938]: I0813 00:55:07.944371 1938 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036-hostproc" (OuterVolumeSpecName: "hostproc") pod "1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036" (UID: "1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:55:07.944673 kubelet[1938]: I0813 00:55:07.944654 1938 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036-cni-path" (OuterVolumeSpecName: "cni-path") pod "1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036" (UID: "1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:55:07.944776 kubelet[1938]: I0813 00:55:07.944759 1938 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036" (UID: "1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:55:07.950101 systemd[1]: var-lib-kubelet-pods-1688b4f8\x2d63ad\x2d4d8e\x2d82d3\x2da6c2a3c8e036-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmv7cn.mount: Deactivated successfully. Aug 13 00:55:07.951416 kubelet[1938]: I0813 00:55:07.950526 1938 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036-kube-api-access-mv7cn" (OuterVolumeSpecName: "kube-api-access-mv7cn") pod "1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036" (UID: "1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036"). InnerVolumeSpecName "kube-api-access-mv7cn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 00:55:07.951416 kubelet[1938]: I0813 00:55:07.950607 1938 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036" (UID: "1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:55:07.951416 kubelet[1938]: I0813 00:55:07.950626 1938 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036" (UID: "1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 00:55:07.956766 systemd[1]: var-lib-kubelet-pods-1688b4f8\x2d63ad\x2d4d8e\x2d82d3\x2da6c2a3c8e036-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Aug 13 00:55:07.959479 kubelet[1938]: I0813 00:55:07.959405 1938 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036" (UID: "1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 00:55:08.044325 kubelet[1938]: I0813 00:55:08.044253 1938 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036-host-proc-sys-net\") on node \"ci-3510.3.8-f-585a890caa\" DevicePath \"\"" Aug 13 00:55:08.044702 kubelet[1938]: I0813 00:55:08.044665 1938 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036-bpf-maps\") on node \"ci-3510.3.8-f-585a890caa\" DevicePath \"\"" Aug 13 00:55:08.044864 kubelet[1938]: I0813 00:55:08.044838 1938 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036-xtables-lock\") on node \"ci-3510.3.8-f-585a890caa\" DevicePath \"\"" Aug 13 00:55:08.045020 kubelet[1938]: I0813 00:55:08.044998 1938 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036-cilium-run\") on node \"ci-3510.3.8-f-585a890caa\" DevicePath \"\"" Aug 13 00:55:08.045167 kubelet[1938]: I0813 00:55:08.045133 1938 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mv7cn\" (UniqueName: \"kubernetes.io/projected/1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036-kube-api-access-mv7cn\") on node \"ci-3510.3.8-f-585a890caa\" DevicePath \"\"" Aug 13 00:55:08.045292 kubelet[1938]: I0813 00:55:08.045270 1938 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036-cilium-cgroup\") on node \"ci-3510.3.8-f-585a890caa\" DevicePath \"\"" Aug 13 00:55:08.045402 kubelet[1938]: I0813 00:55:08.045383 1938 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036-hostproc\") on node \"ci-3510.3.8-f-585a890caa\" DevicePath \"\"" Aug 13 00:55:08.045516 kubelet[1938]: I0813 00:55:08.045494 1938 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036-hubble-tls\") on node \"ci-3510.3.8-f-585a890caa\" DevicePath \"\"" Aug 13 00:55:08.045621 kubelet[1938]: I0813 00:55:08.045601 1938 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036-etc-cni-netd\") on node \"ci-3510.3.8-f-585a890caa\" DevicePath \"\"" Aug 13 00:55:08.045722 kubelet[1938]: I0813 00:55:08.045703 1938 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036-cni-path\") on node \"ci-3510.3.8-f-585a890caa\" DevicePath \"\"" Aug 13 00:55:08.045859 kubelet[1938]: I0813 00:55:08.045837 1938 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036-lib-modules\") on node \"ci-3510.3.8-f-585a890caa\" DevicePath \"\"" Aug 13 00:55:08.046004 kubelet[1938]: I0813 00:55:08.045982 1938 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036-host-proc-sys-kernel\") on node \"ci-3510.3.8-f-585a890caa\" DevicePath \"\"" Aug 13 00:55:08.134205 kubelet[1938]: E0813 00:55:08.134157 1938 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:55:08.137314 env[1190]: time="2025-08-13T00:55:08.137154070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-rch2n,Uid:8a43b934-62a6-4148-be81-43e0131241f4,Namespace:kube-system,Attempt:0,}" Aug 13 00:55:08.148869 kubelet[1938]: I0813 00:55:08.146784 1938 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036-clustermesh-secrets\") pod \"1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036\" (UID: \"1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036\") " Aug 13 00:55:08.148869 kubelet[1938]: I0813 00:55:08.147415 1938 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036-cilium-config-path\") pod \"1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036\" (UID: \"1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036\") " Aug 13 00:55:08.151125 kubelet[1938]: I0813 00:55:08.150929 1938 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036" (UID: "1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 00:55:08.157525 systemd[1]: var-lib-kubelet-pods-1688b4f8\x2d63ad\x2d4d8e\x2d82d3\x2da6c2a3c8e036-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Aug 13 00:55:08.162104 kubelet[1938]: I0813 00:55:08.161915 1938 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036" (UID: "1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 00:55:08.175295 env[1190]: time="2025-08-13T00:55:08.174582931Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:55:08.175295 env[1190]: time="2025-08-13T00:55:08.174738312Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:55:08.175295 env[1190]: time="2025-08-13T00:55:08.174754726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:55:08.175295 env[1190]: time="2025-08-13T00:55:08.175196295Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/15db8255f2ede4a1f1da1fe013f0e37f90fa4bf91f27f78a00f3994d5c4293b9 pid=2214 runtime=io.containerd.runc.v2 Aug 13 00:55:08.193490 systemd[1]: Started cri-containerd-15db8255f2ede4a1f1da1fe013f0e37f90fa4bf91f27f78a00f3994d5c4293b9.scope. 
Aug 13 00:55:08.248145 kubelet[1938]: I0813 00:55:08.248027 1938 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036-clustermesh-secrets\") on node \"ci-3510.3.8-f-585a890caa\" DevicePath \"\"" Aug 13 00:55:08.248145 kubelet[1938]: I0813 00:55:08.248085 1938 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036-cilium-config-path\") on node \"ci-3510.3.8-f-585a890caa\" DevicePath \"\"" Aug 13 00:55:08.272064 env[1190]: time="2025-08-13T00:55:08.272006684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-rch2n,Uid:8a43b934-62a6-4148-be81-43e0131241f4,Namespace:kube-system,Attempt:0,} returns sandbox id \"15db8255f2ede4a1f1da1fe013f0e37f90fa4bf91f27f78a00f3994d5c4293b9\"" Aug 13 00:55:08.275393 kubelet[1938]: E0813 00:55:08.273214 1938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:55:08.278464 env[1190]: time="2025-08-13T00:55:08.277640427Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Aug 13 00:55:08.896989 systemd[1]: Removed slice kubepods-burstable-pod1688b4f8_63ad_4d8e_82d3_a6c2a3c8e036.slice. Aug 13 00:55:08.958562 systemd[1]: Created slice kubepods-burstable-pod91ab3916_4482_404b_b1a5_bd4bb11efae4.slice. Aug 13 00:55:09.054032 kubelet[1938]: I0813 00:55:09.053942 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/91ab3916-4482-404b-b1a5-bd4bb11efae4-cilium-cgroup\") pod \"cilium-fp2tf\" (UID: \"91ab3916-4482-404b-b1a5-bd4bb11efae4\") " pod="kube-system/cilium-fp2tf" Aug 13 00:55:09.054699 kubelet[1938]: I0813 00:55:09.054667 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/91ab3916-4482-404b-b1a5-bd4bb11efae4-xtables-lock\") pod \"cilium-fp2tf\" (UID: \"91ab3916-4482-404b-b1a5-bd4bb11efae4\") " pod="kube-system/cilium-fp2tf" Aug 13 00:55:09.054833 kubelet[1938]: I0813 00:55:09.054811 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/91ab3916-4482-404b-b1a5-bd4bb11efae4-cilium-config-path\") pod \"cilium-fp2tf\" (UID: \"91ab3916-4482-404b-b1a5-bd4bb11efae4\") " pod="kube-system/cilium-fp2tf" Aug 13 00:55:09.054987 kubelet[1938]: I0813 00:55:09.054933 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/91ab3916-4482-404b-b1a5-bd4bb11efae4-hubble-tls\") pod \"cilium-fp2tf\" (UID: \"91ab3916-4482-404b-b1a5-bd4bb11efae4\") " pod="kube-system/cilium-fp2tf" Aug 13 00:55:09.055271 kubelet[1938]: I0813 00:55:09.055242 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/91ab3916-4482-404b-b1a5-bd4bb11efae4-host-proc-sys-kernel\") pod \"cilium-fp2tf\" (UID: \"91ab3916-4482-404b-b1a5-bd4bb11efae4\") " pod="kube-system/cilium-fp2tf" Aug 13 00:55:09.055419 kubelet[1938]: I0813 00:55:09.055399 1938 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/91ab3916-4482-404b-b1a5-bd4bb11efae4-etc-cni-netd\") pod \"cilium-fp2tf\" (UID: \"91ab3916-4482-404b-b1a5-bd4bb11efae4\") " pod="kube-system/cilium-fp2tf" Aug 13 00:55:09.055539 kubelet[1938]: I0813 00:55:09.055521 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/91ab3916-4482-404b-b1a5-bd4bb11efae4-clustermesh-secrets\") pod \"cilium-fp2tf\" (UID: \"91ab3916-4482-404b-b1a5-bd4bb11efae4\") " pod="kube-system/cilium-fp2tf" Aug 13 00:55:09.055625 kubelet[1938]: I0813 00:55:09.055611 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/91ab3916-4482-404b-b1a5-bd4bb11efae4-hostproc\") pod \"cilium-fp2tf\" (UID: \"91ab3916-4482-404b-b1a5-bd4bb11efae4\") " pod="kube-system/cilium-fp2tf" Aug 13 00:55:09.055737 kubelet[1938]: I0813 00:55:09.055720 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p5hb\" (UniqueName: \"kubernetes.io/projected/91ab3916-4482-404b-b1a5-bd4bb11efae4-kube-api-access-8p5hb\") pod \"cilium-fp2tf\" (UID: \"91ab3916-4482-404b-b1a5-bd4bb11efae4\") " pod="kube-system/cilium-fp2tf" Aug 13 00:55:09.055827 kubelet[1938]: I0813 00:55:09.055810 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/91ab3916-4482-404b-b1a5-bd4bb11efae4-bpf-maps\") pod \"cilium-fp2tf\" (UID: \"91ab3916-4482-404b-b1a5-bd4bb11efae4\") " pod="kube-system/cilium-fp2tf" Aug 13 00:55:09.055912 kubelet[1938]: I0813 00:55:09.055896 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/91ab3916-4482-404b-b1a5-bd4bb11efae4-lib-modules\") pod \"cilium-fp2tf\" (UID: \"91ab3916-4482-404b-b1a5-bd4bb11efae4\") " pod="kube-system/cilium-fp2tf" Aug 13 00:55:09.056147 kubelet[1938]: I0813 00:55:09.056047 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/91ab3916-4482-404b-b1a5-bd4bb11efae4-cilium-run\") pod \"cilium-fp2tf\" (UID: \"91ab3916-4482-404b-b1a5-bd4bb11efae4\") " pod="kube-system/cilium-fp2tf" Aug 13 00:55:09.056147 kubelet[1938]: I0813 00:55:09.056128 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/91ab3916-4482-404b-b1a5-bd4bb11efae4-cni-path\") pod \"cilium-fp2tf\" (UID: \"91ab3916-4482-404b-b1a5-bd4bb11efae4\") " pod="kube-system/cilium-fp2tf" Aug 13 00:55:09.056320 kubelet[1938]: I0813 00:55:09.056161 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/91ab3916-4482-404b-b1a5-bd4bb11efae4-host-proc-sys-net\") pod \"cilium-fp2tf\" (UID: \"91ab3916-4482-404b-b1a5-bd4bb11efae4\") " pod="kube-system/cilium-fp2tf" Aug 13 00:55:09.262551 kubelet[1938]: E0813 00:55:09.262495 1938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:55:09.265836 
env[1190]: time="2025-08-13T00:55:09.264882762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fp2tf,Uid:91ab3916-4482-404b-b1a5-bd4bb11efae4,Namespace:kube-system,Attempt:0,}" Aug 13 00:55:09.305331 env[1190]: time="2025-08-13T00:55:09.305199880Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:55:09.305526 env[1190]: time="2025-08-13T00:55:09.305336710Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:55:09.305526 env[1190]: time="2025-08-13T00:55:09.305377593Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:55:09.306299 env[1190]: time="2025-08-13T00:55:09.306224281Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f4d75240f6abcfab4428336c2c50704d2c88d22906a70cebd06309fd415a7853 pid=2258 runtime=io.containerd.runc.v2 Aug 13 00:55:09.355650 systemd[1]: Started cri-containerd-f4d75240f6abcfab4428336c2c50704d2c88d22906a70cebd06309fd415a7853.scope. Aug 13 00:55:09.409433 env[1190]: time="2025-08-13T00:55:09.409365219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fp2tf,Uid:91ab3916-4482-404b-b1a5-bd4bb11efae4,Namespace:kube-system,Attempt:0,} returns sandbox id \"f4d75240f6abcfab4428336c2c50704d2c88d22906a70cebd06309fd415a7853\"" Aug 13 00:55:09.410751 kubelet[1938]: E0813 00:55:09.410715 1938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:55:09.728921 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1684114618.mount: Deactivated successfully. 
Aug 13 00:55:09.841745 kubelet[1938]: I0813 00:55:09.841690 1938 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036" path="/var/lib/kubelet/pods/1688b4f8-63ad-4d8e-82d3-a6c2a3c8e036/volumes" Aug 13 00:55:10.841262 env[1190]: time="2025-08-13T00:55:10.841191933Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:10.844290 env[1190]: time="2025-08-13T00:55:10.844211669Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:10.847154 env[1190]: time="2025-08-13T00:55:10.847075471Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:10.847936 env[1190]: time="2025-08-13T00:55:10.847886189Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Aug 13 00:55:10.857590 env[1190]: time="2025-08-13T00:55:10.857059183Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Aug 13 00:55:10.860705 env[1190]: time="2025-08-13T00:55:10.860644180Z" level=info msg="CreateContainer within sandbox \"15db8255f2ede4a1f1da1fe013f0e37f90fa4bf91f27f78a00f3994d5c4293b9\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Aug 13 00:55:10.878809 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount352942786.mount: Deactivated successfully. Aug 13 00:55:10.889111 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2848666921.mount: Deactivated successfully. Aug 13 00:55:10.891463 env[1190]: time="2025-08-13T00:55:10.891351621Z" level=info msg="CreateContainer within sandbox \"15db8255f2ede4a1f1da1fe013f0e37f90fa4bf91f27f78a00f3994d5c4293b9\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"79604100dcd8600ba3cc699e2d775713ca4034678d226d49b57eee971602fd3c\"" Aug 13 00:55:10.892747 env[1190]: time="2025-08-13T00:55:10.892580135Z" level=info msg="StartContainer for \"79604100dcd8600ba3cc699e2d775713ca4034678d226d49b57eee971602fd3c\"" Aug 13 00:55:10.938378 systemd[1]: Started cri-containerd-79604100dcd8600ba3cc699e2d775713ca4034678d226d49b57eee971602fd3c.scope. 
Aug 13 00:55:11.036765 env[1190]: time="2025-08-13T00:55:11.036678055Z" level=info msg="StartContainer for \"79604100dcd8600ba3cc699e2d775713ca4034678d226d49b57eee971602fd3c\" returns successfully" Aug 13 00:55:11.581352 kubelet[1938]: E0813 00:55:11.581308 1938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:55:11.908802 kubelet[1938]: E0813 00:55:11.908676 1938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:55:11.910171 kubelet[1938]: E0813 00:55:11.909695 1938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:55:12.973171 kubelet[1938]: E0813 00:55:12.972615 1938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:55:14.698566 kubelet[1938]: E0813 00:55:14.698511 1938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:55:14.718326 kubelet[1938]: I0813 00:55:14.718208 1938 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-rch2n" podStartSLOduration=6.139444957 podStartE2EDuration="8.718181259s" podCreationTimestamp="2025-08-13 00:55:06 +0000 UTC" firstStartedPulling="2025-08-13 00:55:08.276132662 +0000 UTC m=+4.723383154" lastFinishedPulling="2025-08-13 00:55:10.854868962 +0000 UTC m=+7.302119456" observedRunningTime="2025-08-13 00:55:12.235355868 +0000 UTC m=+8.682606384" watchObservedRunningTime="2025-08-13 00:55:14.718181259 +0000 UTC m=+11.165431776" Aug 13 00:55:14.970893 kubelet[1938]: E0813 00:55:14.970715 1938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:55:17.740875 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount877190854.mount: Deactivated successfully. 
Aug 13 00:55:22.283468 env[1190]: time="2025-08-13T00:55:22.283361878Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:22.285918 env[1190]: time="2025-08-13T00:55:22.285859108Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:22.288407 env[1190]: time="2025-08-13T00:55:22.288349715Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 00:55:22.291157 env[1190]: time="2025-08-13T00:55:22.290953122Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Aug 13 00:55:22.295701 env[1190]: time="2025-08-13T00:55:22.295642168Z" level=info msg="CreateContainer within sandbox \"f4d75240f6abcfab4428336c2c50704d2c88d22906a70cebd06309fd415a7853\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 00:55:22.310298 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3451088361.mount: Deactivated successfully. Aug 13 00:55:22.321550 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3265023485.mount: Deactivated successfully. Aug 13 00:55:22.328380 env[1190]: time="2025-08-13T00:55:22.328260339Z" level=info msg="CreateContainer within sandbox \"f4d75240f6abcfab4428336c2c50704d2c88d22906a70cebd06309fd415a7853\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6e6efd4e32dd97e862e206467234a8208ec5447637835a8f8535e3d8061cf39d\"" Aug 13 00:55:22.329913 env[1190]: time="2025-08-13T00:55:22.329868863Z" level=info msg="StartContainer for \"6e6efd4e32dd97e862e206467234a8208ec5447637835a8f8535e3d8061cf39d\"" Aug 13 00:55:22.379526 systemd[1]: Started cri-containerd-6e6efd4e32dd97e862e206467234a8208ec5447637835a8f8535e3d8061cf39d.scope. Aug 13 00:55:22.433872 env[1190]: time="2025-08-13T00:55:22.433780825Z" level=info msg="StartContainer for \"6e6efd4e32dd97e862e206467234a8208ec5447637835a8f8535e3d8061cf39d\" returns successfully" Aug 13 00:55:22.452211 systemd[1]: cri-containerd-6e6efd4e32dd97e862e206467234a8208ec5447637835a8f8535e3d8061cf39d.scope: Deactivated successfully. 
Aug 13 00:55:22.487124 env[1190]: time="2025-08-13T00:55:22.486996982Z" level=info msg="shim disconnected" id=6e6efd4e32dd97e862e206467234a8208ec5447637835a8f8535e3d8061cf39d Aug 13 00:55:22.487512 env[1190]: time="2025-08-13T00:55:22.487474809Z" level=warning msg="cleaning up after shim disconnected" id=6e6efd4e32dd97e862e206467234a8208ec5447637835a8f8535e3d8061cf39d namespace=k8s.io Aug 13 00:55:22.487636 env[1190]: time="2025-08-13T00:55:22.487609947Z" level=info msg="cleaning up dead shim" Aug 13 00:55:22.503142 env[1190]: time="2025-08-13T00:55:22.502977407Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:55:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2380 runtime=io.containerd.runc.v2\n" Aug 13 00:55:22.992449 kubelet[1938]: E0813 00:55:22.992398 1938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:55:22.997510 env[1190]: time="2025-08-13T00:55:22.997426553Z" level=info msg="CreateContainer within sandbox \"f4d75240f6abcfab4428336c2c50704d2c88d22906a70cebd06309fd415a7853\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 13 00:55:23.026616 env[1190]: time="2025-08-13T00:55:23.026531842Z" level=info msg="CreateContainer within sandbox \"f4d75240f6abcfab4428336c2c50704d2c88d22906a70cebd06309fd415a7853\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9378e691e15a291ed1379c94e79cfc460303cc1efb29a4dd621fa453881515c7\"" Aug 13 00:55:23.029516 env[1190]: time="2025-08-13T00:55:23.029465281Z" level=info msg="StartContainer for \"9378e691e15a291ed1379c94e79cfc460303cc1efb29a4dd621fa453881515c7\"" Aug 13 00:55:23.051452 systemd[1]: Started cri-containerd-9378e691e15a291ed1379c94e79cfc460303cc1efb29a4dd621fa453881515c7.scope. Aug 13 00:55:23.115797 env[1190]: time="2025-08-13T00:55:23.115748620Z" level=info msg="StartContainer for \"9378e691e15a291ed1379c94e79cfc460303cc1efb29a4dd621fa453881515c7\" returns successfully" Aug 13 00:55:23.122904 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 00:55:23.123275 systemd[1]: Stopped systemd-sysctl.service. Aug 13 00:55:23.123584 systemd[1]: Stopping systemd-sysctl.service... Aug 13 00:55:23.125824 systemd[1]: Starting systemd-sysctl.service... Aug 13 00:55:23.133054 systemd[1]: cri-containerd-9378e691e15a291ed1379c94e79cfc460303cc1efb29a4dd621fa453881515c7.scope: Deactivated successfully. Aug 13 00:55:23.144928 systemd[1]: Finished systemd-sysctl.service. Aug 13 00:55:23.170720 env[1190]: time="2025-08-13T00:55:23.170664034Z" level=info msg="shim disconnected" id=9378e691e15a291ed1379c94e79cfc460303cc1efb29a4dd621fa453881515c7 Aug 13 00:55:23.171259 env[1190]: time="2025-08-13T00:55:23.171211377Z" level=warning msg="cleaning up after shim disconnected" id=9378e691e15a291ed1379c94e79cfc460303cc1efb29a4dd621fa453881515c7 namespace=k8s.io Aug 13 00:55:23.171394 env[1190]: time="2025-08-13T00:55:23.171375705Z" level=info msg="cleaning up dead shim" Aug 13 00:55:23.184527 env[1190]: time="2025-08-13T00:55:23.184403258Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:55:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2445 runtime=io.containerd.runc.v2\n" Aug 13 00:55:23.308642 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e6efd4e32dd97e862e206467234a8208ec5447637835a8f8535e3d8061cf39d-rootfs.mount: Deactivated successfully. 
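Each cilium init container above follows the same arc: CreateContainer, StartContainer, the runc scope deactivates as the one-shot process exits, the shim disconnects, and containerd cleans up the dead shim. Those exits are also published on containerd's event bus; a small watcher for them, sketched against the containerd 1.x Go client (socket path and filter syntax assumed; not part of this host's tooling):

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        apievents "github.com/containerd/containerd/api/events"
        "github.com/containerd/typeurl"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // Each one-shot init container produces a /tasks/exit event,
        // after which its runc shim detaches ("shim disconnected").
        ch, errs := client.Subscribe(context.Background(), `topic=="/tasks/exit"`)
        for {
            select {
            case env, ok := <-ch:
                if !ok {
                    return
                }
                ev, err := typeurl.UnmarshalAny(env.Event)
                if err != nil {
                    continue
                }
                if exit, ok := ev.(*apievents.TaskExit); ok {
                    fmt.Printf("task %s exited with status %d\n",
                        exit.ContainerID, exit.ExitStatus)
                }
            case err := <-errs:
                log.Fatal(err)
            }
        }
    }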
Aug 13 00:55:23.997115 kubelet[1938]: E0813 00:55:23.997057 1938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:55:24.003504 env[1190]: time="2025-08-13T00:55:24.003407123Z" level=info msg="CreateContainer within sandbox \"f4d75240f6abcfab4428336c2c50704d2c88d22906a70cebd06309fd415a7853\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 13 00:55:24.039690 env[1190]: time="2025-08-13T00:55:24.039545944Z" level=info msg="CreateContainer within sandbox \"f4d75240f6abcfab4428336c2c50704d2c88d22906a70cebd06309fd415a7853\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ffa32764481f9393d075ad6b6134172f3ecec164ee1e6e1287decfd55fba67d1\"" Aug 13 00:55:24.040889 env[1190]: time="2025-08-13T00:55:24.040835869Z" level=info msg="StartContainer for \"ffa32764481f9393d075ad6b6134172f3ecec164ee1e6e1287decfd55fba67d1\"" Aug 13 00:55:24.079357 systemd[1]: Started cri-containerd-ffa32764481f9393d075ad6b6134172f3ecec164ee1e6e1287decfd55fba67d1.scope. Aug 13 00:55:24.133787 env[1190]: time="2025-08-13T00:55:24.133695888Z" level=info msg="StartContainer for \"ffa32764481f9393d075ad6b6134172f3ecec164ee1e6e1287decfd55fba67d1\" returns successfully" Aug 13 00:55:24.136511 systemd[1]: cri-containerd-ffa32764481f9393d075ad6b6134172f3ecec164ee1e6e1287decfd55fba67d1.scope: Deactivated successfully. Aug 13 00:55:24.175521 env[1190]: time="2025-08-13T00:55:24.175456406Z" level=info msg="shim disconnected" id=ffa32764481f9393d075ad6b6134172f3ecec164ee1e6e1287decfd55fba67d1 Aug 13 00:55:24.175952 env[1190]: time="2025-08-13T00:55:24.175909691Z" level=warning msg="cleaning up after shim disconnected" id=ffa32764481f9393d075ad6b6134172f3ecec164ee1e6e1287decfd55fba67d1 namespace=k8s.io Aug 13 00:55:24.176102 env[1190]: time="2025-08-13T00:55:24.176080633Z" level=info msg="cleaning up dead shim" Aug 13 00:55:24.190910 env[1190]: time="2025-08-13T00:55:24.190838539Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:55:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2504 runtime=io.containerd.runc.v2\n" Aug 13 00:55:24.307860 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ffa32764481f9393d075ad6b6134172f3ecec164ee1e6e1287decfd55fba67d1-rootfs.mount: Deactivated successfully. Aug 13 00:55:25.002156 kubelet[1938]: E0813 00:55:25.002083 1938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:55:25.007462 env[1190]: time="2025-08-13T00:55:25.005843661Z" level=info msg="CreateContainer within sandbox \"f4d75240f6abcfab4428336c2c50704d2c88d22906a70cebd06309fd415a7853\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 13 00:55:25.036189 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1722869289.mount: Deactivated successfully. 
Aug 13 00:55:25.047926 env[1190]: time="2025-08-13T00:55:25.047841521Z" level=info msg="CreateContainer within sandbox \"f4d75240f6abcfab4428336c2c50704d2c88d22906a70cebd06309fd415a7853\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b69ac6e02f455790f51c517838cb0d9cb905dfb4bceeb1fb05839f95aacd1550\"" Aug 13 00:55:25.049416 env[1190]: time="2025-08-13T00:55:25.049352629Z" level=info msg="StartContainer for \"b69ac6e02f455790f51c517838cb0d9cb905dfb4bceeb1fb05839f95aacd1550\"" Aug 13 00:55:25.081727 systemd[1]: Started cri-containerd-b69ac6e02f455790f51c517838cb0d9cb905dfb4bceeb1fb05839f95aacd1550.scope. Aug 13 00:55:25.136190 systemd[1]: cri-containerd-b69ac6e02f455790f51c517838cb0d9cb905dfb4bceeb1fb05839f95aacd1550.scope: Deactivated successfully. Aug 13 00:55:25.138768 env[1190]: time="2025-08-13T00:55:25.138640654Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91ab3916_4482_404b_b1a5_bd4bb11efae4.slice/cri-containerd-b69ac6e02f455790f51c517838cb0d9cb905dfb4bceeb1fb05839f95aacd1550.scope/memory.events\": no such file or directory" Aug 13 00:55:25.145897 env[1190]: time="2025-08-13T00:55:25.145808463Z" level=info msg="StartContainer for \"b69ac6e02f455790f51c517838cb0d9cb905dfb4bceeb1fb05839f95aacd1550\" returns successfully" Aug 13 00:55:25.180602 env[1190]: time="2025-08-13T00:55:25.180543730Z" level=info msg="shim disconnected" id=b69ac6e02f455790f51c517838cb0d9cb905dfb4bceeb1fb05839f95aacd1550 Aug 13 00:55:25.181146 env[1190]: time="2025-08-13T00:55:25.181096189Z" level=warning msg="cleaning up after shim disconnected" id=b69ac6e02f455790f51c517838cb0d9cb905dfb4bceeb1fb05839f95aacd1550 namespace=k8s.io Aug 13 00:55:25.181294 env[1190]: time="2025-08-13T00:55:25.181273840Z" level=info msg="cleaning up dead shim" Aug 13 00:55:25.194869 env[1190]: time="2025-08-13T00:55:25.194779385Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:55:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2564 runtime=io.containerd.runc.v2\n" Aug 13 00:55:25.307509 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b69ac6e02f455790f51c517838cb0d9cb905dfb4bceeb1fb05839f95aacd1550-rootfs.mount: Deactivated successfully. Aug 13 00:55:26.014373 kubelet[1938]: E0813 00:55:26.014325 1938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:55:26.027541 env[1190]: time="2025-08-13T00:55:26.024591521Z" level=info msg="CreateContainer within sandbox \"f4d75240f6abcfab4428336c2c50704d2c88d22906a70cebd06309fd415a7853\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 13 00:55:26.048321 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3319528876.mount: Deactivated successfully. Aug 13 00:55:26.061373 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2321324320.mount: Deactivated successfully. 
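The *cgroupsv2.Manager.EventChan warning above is benign for one-shot containers: clean-cilium-state exits so quickly that its cgroup scope directory is removed before containerd can attach an inotify watch to memory.events. A sketch of that race using fsnotify (the "..." stands in for the full kubepods slice path from the warning; illustrative only):

    package main

    import (
        "errors"
        "fmt"
        "io/fs"
        "log"
        "os"

        "github.com/fsnotify/fsnotify"
    )

    func main() {
        w, err := fsnotify.NewWatcher()
        if err != nil {
            log.Fatal(err)
        }
        defer w.Close()

        // Shape of the path from the warning; for a short-lived container
        // the scope directory may already be gone by the time we get here.
        path := "/sys/fs/cgroup/kubepods.slice/.../memory.events"
        if _, err := os.Stat(path); errors.Is(err, fs.ErrNotExist) {
            fmt.Println("cgroup already gone; skipping OOM monitoring:", err)
            return
        }
        if err := w.Add(path); err != nil {
            log.Fatal(err) // the containerd warning corresponds to this failure
        }
        for ev := range w.Events {
            fmt.Println("memory.events changed:", ev)
        }
    }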
Aug 13 00:55:26.063545 env[1190]: time="2025-08-13T00:55:26.063416496Z" level=info msg="CreateContainer within sandbox \"f4d75240f6abcfab4428336c2c50704d2c88d22906a70cebd06309fd415a7853\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"32da30aafb93814b548a81725a124e2939cb1b6c1cee6a83d934b4685c2543a3\"" Aug 13 00:55:26.069416 env[1190]: time="2025-08-13T00:55:26.065996747Z" level=info msg="StartContainer for \"32da30aafb93814b548a81725a124e2939cb1b6c1cee6a83d934b4685c2543a3\"" Aug 13 00:55:26.091655 systemd[1]: Started cri-containerd-32da30aafb93814b548a81725a124e2939cb1b6c1cee6a83d934b4685c2543a3.scope. Aug 13 00:55:26.154750 env[1190]: time="2025-08-13T00:55:26.154673708Z" level=info msg="StartContainer for \"32da30aafb93814b548a81725a124e2939cb1b6c1cee6a83d934b4685c2543a3\" returns successfully" Aug 13 00:55:26.319537 kubelet[1938]: I0813 00:55:26.319416 1938 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Aug 13 00:55:26.438363 systemd[1]: Created slice kubepods-burstable-pod62651de6_c109_4cd4_a5bd_2787c25d871d.slice. Aug 13 00:55:26.448508 systemd[1]: Created slice kubepods-burstable-pod714bbe55_79e1_4e50_9250_1e798a68a8fc.slice. Aug 13 00:55:26.538116 kubelet[1938]: I0813 00:55:26.537933 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2z59\" (UniqueName: \"kubernetes.io/projected/714bbe55-79e1-4e50-9250-1e798a68a8fc-kube-api-access-f2z59\") pod \"coredns-668d6bf9bc-7t6zb\" (UID: \"714bbe55-79e1-4e50-9250-1e798a68a8fc\") " pod="kube-system/coredns-668d6bf9bc-7t6zb" Aug 13 00:55:26.538325 kubelet[1938]: I0813 00:55:26.538167 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62651de6-c109-4cd4-a5bd-2787c25d871d-config-volume\") pod \"coredns-668d6bf9bc-v8mfq\" (UID: \"62651de6-c109-4cd4-a5bd-2787c25d871d\") " pod="kube-system/coredns-668d6bf9bc-v8mfq" Aug 13 00:55:26.538325 kubelet[1938]: I0813 00:55:26.538241 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x76qb\" (UniqueName: \"kubernetes.io/projected/62651de6-c109-4cd4-a5bd-2787c25d871d-kube-api-access-x76qb\") pod \"coredns-668d6bf9bc-v8mfq\" (UID: \"62651de6-c109-4cd4-a5bd-2787c25d871d\") " pod="kube-system/coredns-668d6bf9bc-v8mfq" Aug 13 00:55:26.538325 kubelet[1938]: I0813 00:55:26.538281 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/714bbe55-79e1-4e50-9250-1e798a68a8fc-config-volume\") pod \"coredns-668d6bf9bc-7t6zb\" (UID: \"714bbe55-79e1-4e50-9250-1e798a68a8fc\") " pod="kube-system/coredns-668d6bf9bc-7t6zb" Aug 13 00:55:26.751994 kubelet[1938]: E0813 00:55:26.750241 1938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:55:26.752207 env[1190]: time="2025-08-13T00:55:26.751621725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-v8mfq,Uid:62651de6-c109-4cd4-a5bd-2787c25d871d,Namespace:kube-system,Attempt:0,}" Aug 13 00:55:26.753890 kubelet[1938]: E0813 00:55:26.753854 1938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 
67.207.67.3" Aug 13 00:55:26.755165 env[1190]: time="2025-08-13T00:55:26.754721864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7t6zb,Uid:714bbe55-79e1-4e50-9250-1e798a68a8fc,Namespace:kube-system,Attempt:0,}" Aug 13 00:55:27.022512 kubelet[1938]: E0813 00:55:27.022378 1938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:55:27.060882 kubelet[1938]: I0813 00:55:27.058024 1938 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-fp2tf" podStartSLOduration=6.177806167 podStartE2EDuration="19.057989444s" podCreationTimestamp="2025-08-13 00:55:08 +0000 UTC" firstStartedPulling="2025-08-13 00:55:09.412239681 +0000 UTC m=+5.859490171" lastFinishedPulling="2025-08-13 00:55:22.292422968 +0000 UTC m=+18.739673448" observedRunningTime="2025-08-13 00:55:27.057323131 +0000 UTC m=+23.504573645" watchObservedRunningTime="2025-08-13 00:55:27.057989444 +0000 UTC m=+23.505239938" Aug 13 00:55:28.024912 kubelet[1938]: E0813 00:55:28.024873 1938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:55:28.608045 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Aug 13 00:55:28.608930 systemd-networkd[1008]: cilium_host: Link UP Aug 13 00:55:28.609152 systemd-networkd[1008]: cilium_net: Link UP Aug 13 00:55:28.609159 systemd-networkd[1008]: cilium_net: Gained carrier Aug 13 00:55:28.609405 systemd-networkd[1008]: cilium_host: Gained carrier Aug 13 00:55:28.627422 systemd-networkd[1008]: cilium_net: Gained IPv6LL Aug 13 00:55:28.855877 systemd[1]: run-containerd-runc-k8s.io-32da30aafb93814b548a81725a124e2939cb1b6c1cee6a83d934b4685c2543a3-runc.q8RNvn.mount: Deactivated successfully. 
Aug 13 00:55:28.908548 systemd-networkd[1008]: cilium_vxlan: Link UP Aug 13 00:55:28.908558 systemd-networkd[1008]: cilium_vxlan: Gained carrier Aug 13 00:55:29.027989 kubelet[1938]: E0813 00:55:29.027621 1938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:55:29.029095 systemd-networkd[1008]: cilium_host: Gained IPv6LL Aug 13 00:55:29.375344 kernel: NET: Registered PF_ALG protocol family Aug 13 00:55:30.421155 systemd-networkd[1008]: lxc_health: Link UP Aug 13 00:55:30.426420 systemd-networkd[1008]: lxc_health: Gained carrier Aug 13 00:55:30.429009 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Aug 13 00:55:30.659641 systemd-networkd[1008]: cilium_vxlan: Gained IPv6LL Aug 13 00:55:30.825791 systemd-networkd[1008]: lxcf5cf8dbb1342: Link UP Aug 13 00:55:30.833047 kernel: eth0: renamed from tmp96f70 Aug 13 00:55:30.842226 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcf5cf8dbb1342: link becomes ready Aug 13 00:55:30.840157 systemd-networkd[1008]: lxcf5cf8dbb1342: Gained carrier Aug 13 00:55:30.859315 systemd-networkd[1008]: lxc726c46a2d2c9: Link UP Aug 13 00:55:30.872546 kernel: eth0: renamed from tmp74562 Aug 13 00:55:30.881691 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc726c46a2d2c9: link becomes ready Aug 13 00:55:30.881409 systemd-networkd[1008]: lxc726c46a2d2c9: Gained carrier Aug 13 00:55:31.125192 systemd[1]: run-containerd-runc-k8s.io-32da30aafb93814b548a81725a124e2939cb1b6c1cee6a83d934b4685c2543a3-runc.VtZe9j.mount: Deactivated successfully. Aug 13 00:55:31.265353 kubelet[1938]: E0813 00:55:31.265002 1938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:55:32.043314 kubelet[1938]: E0813 00:55:32.043264 1938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:55:32.116353 systemd-networkd[1008]: lxcf5cf8dbb1342: Gained IPv6LL Aug 13 00:55:32.116798 systemd-networkd[1008]: lxc726c46a2d2c9: Gained IPv6LL Aug 13 00:55:32.181073 systemd-networkd[1008]: lxc_health: Gained IPv6LL Aug 13 00:55:33.046038 kubelet[1938]: E0813 00:55:33.045998 1938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:55:33.343540 systemd[1]: run-containerd-runc-k8s.io-32da30aafb93814b548a81725a124e2939cb1b6c1cee6a83d934b4685c2543a3-runc.Ag7308.mount: Deactivated successfully. Aug 13 00:55:35.561144 systemd[1]: run-containerd-runc-k8s.io-32da30aafb93814b548a81725a124e2939cb1b6c1cee6a83d934b4685c2543a3-runc.UwxjpY.mount: Deactivated successfully. Aug 13 00:55:36.648824 env[1190]: time="2025-08-13T00:55:36.648737314Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:55:36.649567 env[1190]: time="2025-08-13T00:55:36.649503477Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:55:36.649755 env[1190]: time="2025-08-13T00:55:36.649704459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:55:36.651355 env[1190]: time="2025-08-13T00:55:36.651287826Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/96f706dddb59dd50165e3784d28ca0d36e1274c8a34fb332089a59c755878eca pid=3208 runtime=io.containerd.runc.v2 Aug 13 00:55:36.676344 systemd[1]: Started cri-containerd-96f706dddb59dd50165e3784d28ca0d36e1274c8a34fb332089a59c755878eca.scope. Aug 13 00:55:36.762802 env[1190]: time="2025-08-13T00:55:36.762708552Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:55:36.763093 env[1190]: time="2025-08-13T00:55:36.763044270Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:55:36.763260 env[1190]: time="2025-08-13T00:55:36.763222918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:55:36.763572 env[1190]: time="2025-08-13T00:55:36.763539360Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/74562b6e3e413449fece27e4cdd763f77ab9de002ad62d230d705086fbcde4cf pid=3240 runtime=io.containerd.runc.v2 Aug 13 00:55:36.802776 systemd[1]: Started cri-containerd-74562b6e3e413449fece27e4cdd763f77ab9de002ad62d230d705086fbcde4cf.scope. Aug 13 00:55:36.913740 env[1190]: time="2025-08-13T00:55:36.913193238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-v8mfq,Uid:62651de6-c109-4cd4-a5bd-2787c25d871d,Namespace:kube-system,Attempt:0,} returns sandbox id \"96f706dddb59dd50165e3784d28ca0d36e1274c8a34fb332089a59c755878eca\"" Aug 13 00:55:36.915915 kubelet[1938]: E0813 00:55:36.915577 1938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:55:36.919398 env[1190]: time="2025-08-13T00:55:36.919343070Z" level=info msg="CreateContainer within sandbox \"96f706dddb59dd50165e3784d28ca0d36e1274c8a34fb332089a59c755878eca\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:55:36.951036 env[1190]: time="2025-08-13T00:55:36.950980995Z" level=info msg="CreateContainer within sandbox \"96f706dddb59dd50165e3784d28ca0d36e1274c8a34fb332089a59c755878eca\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7ca8a759fd47cd23ccb7695be83bf5ec548b91186c2ab7adede6175144c29802\"" Aug 13 00:55:36.952223 env[1190]: time="2025-08-13T00:55:36.952173178Z" level=info msg="StartContainer for \"7ca8a759fd47cd23ccb7695be83bf5ec548b91186c2ab7adede6175144c29802\"" Aug 13 00:55:36.983008 sudo[1309]: pam_unix(sudo:session): session closed for user root Aug 13 00:55:37.002235 sshd[1305]: pam_unix(sshd:session): session closed for user core Aug 13 00:55:37.022053 systemd[1]: sshd@6-143.198.229.35:22-139.178.68.195:54876.service: Deactivated successfully. Aug 13 00:55:37.023322 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 00:55:37.023552 systemd[1]: session-7.scope: Consumed 8.199s CPU time. Aug 13 00:55:37.026565 systemd-logind[1182]: Session 7 logged out. Waiting for processes to exit. Aug 13 00:55:37.029971 systemd-logind[1182]: Removed session 7. 
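The systemd-networkd lines a little earlier (cilium_host/cilium_net coming up as a pair, then cilium_vxlan and the per-pod lxc* devices gaining carrier) are Cilium plumbing its veth-based datapath over rtnetlink. A rough equivalent of the host/net pair creation, sketched with the vishvananda/netlink package (requires root; the interface names are taken from the log, everything else is illustrative):

    package main

    import (
        "log"

        "github.com/vishvananda/netlink"
    )

    func main() {
        // cilium_host and cilium_net come up together as a veth pair.
        veth := &netlink.Veth{
            LinkAttrs: netlink.LinkAttrs{Name: "cilium_host"},
            PeerName:  "cilium_net",
        }
        if err := netlink.LinkAdd(veth); err != nil {
            log.Fatal(err)
        }
        for _, name := range []string{"cilium_host", "cilium_net"} {
            link, err := netlink.LinkByName(name)
            if err != nil {
                log.Fatal(err)
            }
            // Setting the link up is what networkd logs as "Gained carrier".
            if err := netlink.LinkSetUp(link); err != nil {
                log.Fatal(err)
            }
        }
    }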
Aug 13 00:55:37.055987 systemd[1]: Started cri-containerd-7ca8a759fd47cd23ccb7695be83bf5ec548b91186c2ab7adede6175144c29802.scope. Aug 13 00:55:37.146275 env[1190]: time="2025-08-13T00:55:37.146220049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7t6zb,Uid:714bbe55-79e1-4e50-9250-1e798a68a8fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"74562b6e3e413449fece27e4cdd763f77ab9de002ad62d230d705086fbcde4cf\"" Aug 13 00:55:37.151521 kubelet[1938]: E0813 00:55:37.147600 1938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:55:37.151745 env[1190]: time="2025-08-13T00:55:37.150551174Z" level=info msg="CreateContainer within sandbox \"74562b6e3e413449fece27e4cdd763f77ab9de002ad62d230d705086fbcde4cf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:55:37.170998 env[1190]: time="2025-08-13T00:55:37.170850500Z" level=info msg="StartContainer for \"7ca8a759fd47cd23ccb7695be83bf5ec548b91186c2ab7adede6175144c29802\" returns successfully" Aug 13 00:55:37.180757 env[1190]: time="2025-08-13T00:55:37.180666466Z" level=info msg="CreateContainer within sandbox \"74562b6e3e413449fece27e4cdd763f77ab9de002ad62d230d705086fbcde4cf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f3f974233103d93337675d7b587289610e88fbf83f141eb3fc4fdc10003ce260\"" Aug 13 00:55:37.182057 env[1190]: time="2025-08-13T00:55:37.181987377Z" level=info msg="StartContainer for \"f3f974233103d93337675d7b587289610e88fbf83f141eb3fc4fdc10003ce260\"" Aug 13 00:55:37.216976 systemd[1]: Started cri-containerd-f3f974233103d93337675d7b587289610e88fbf83f141eb3fc4fdc10003ce260.scope. Aug 13 00:55:37.266409 env[1190]: time="2025-08-13T00:55:37.266322770Z" level=info msg="StartContainer for \"f3f974233103d93337675d7b587289610e88fbf83f141eb3fc4fdc10003ce260\" returns successfully" Aug 13 00:55:37.660274 systemd[1]: run-containerd-runc-k8s.io-74562b6e3e413449fece27e4cdd763f77ab9de002ad62d230d705086fbcde4cf-runc.KSA5xV.mount: Deactivated successfully. 
Aug 13 00:55:38.069021 kubelet[1938]: E0813 00:55:38.068758 1938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:55:38.071850 kubelet[1938]: E0813 00:55:38.071812 1938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:55:38.091028 kubelet[1938]: I0813 00:55:38.090899 1938 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-v8mfq" podStartSLOduration=32.090873337 podStartE2EDuration="32.090873337s" podCreationTimestamp="2025-08-13 00:55:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:55:38.087428014 +0000 UTC m=+34.534678515" watchObservedRunningTime="2025-08-13 00:55:38.090873337 +0000 UTC m=+34.538123837" Aug 13 00:55:38.136618 kubelet[1938]: I0813 00:55:38.136497 1938 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-7t6zb" podStartSLOduration=32.136469492 podStartE2EDuration="32.136469492s" podCreationTimestamp="2025-08-13 00:55:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:55:38.131853326 +0000 UTC m=+34.579103837" watchObservedRunningTime="2025-08-13 00:55:38.136469492 +0000 UTC m=+34.583720027" Aug 13 00:55:39.074390 kubelet[1938]: E0813 00:55:39.074310 1938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:55:39.075651 kubelet[1938]: E0813 00:55:39.075610 1938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:55:40.076971 kubelet[1938]: E0813 00:55:40.076892 1938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:55:40.080566 kubelet[1938]: E0813 00:55:40.080525 1938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:55:54.875778 systemd[1]: Started sshd@7-143.198.229.35:22-116.113.254.26:53950.service. 
Aug 13 00:55:57.766364 sshd[3375]: Invalid user config from 116.113.254.26 port 53950 Aug 13 00:55:57.776417 sshd[3375]: pam_faillock(sshd:auth): User unknown Aug 13 00:55:57.777286 sshd[3375]: pam_unix(sshd:auth): check pass; user unknown Aug 13 00:55:57.777346 sshd[3375]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=116.113.254.26 Aug 13 00:55:57.778166 sshd[3375]: pam_faillock(sshd:auth): User unknown Aug 13 00:55:59.651384 sshd[3375]: Failed password for invalid user config from 116.113.254.26 port 53950 ssh2 Aug 13 00:56:00.681990 sshd[3377]: pam_faillock(sshd:auth): User unknown Aug 13 00:56:00.688144 sshd[3375]: Postponed keyboard-interactive for invalid user config from 116.113.254.26 port 53950 ssh2 [preauth] Aug 13 00:56:01.292466 sshd[3377]: pam_unix(sshd:auth): check pass; user unknown Aug 13 00:56:01.293643 sshd[3377]: pam_faillock(sshd:auth): User unknown Aug 13 00:56:03.576688 sshd[3375]: PAM: Permission denied for illegal user config from 116.113.254.26 Aug 13 00:56:03.577680 sshd[3375]: Failed keyboard-interactive/pam for invalid user config from 116.113.254.26 port 53950 ssh2 Aug 13 00:56:04.375077 sshd[3375]: Connection closed by invalid user config 116.113.254.26 port 53950 [preauth] Aug 13 00:56:04.376941 systemd[1]: sshd@7-143.198.229.35:22-116.113.254.26:53950.service: Deactivated successfully. Aug 13 00:56:10.838813 kubelet[1938]: E0813 00:56:10.838741 1938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:56:14.466011 systemd[1]: Started sshd@8-143.198.229.35:22-139.178.68.195:53606.service. Aug 13 00:56:14.520119 sshd[3386]: Accepted publickey for core from 139.178.68.195 port 53606 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78 Aug 13 00:56:14.523007 sshd[3386]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:56:14.531064 systemd-logind[1182]: New session 8 of user core. Aug 13 00:56:14.532591 systemd[1]: Started session-8.scope. Aug 13 00:56:14.829048 sshd[3386]: pam_unix(sshd:session): session closed for user core Aug 13 00:56:14.833753 systemd-logind[1182]: Session 8 logged out. Waiting for processes to exit. Aug 13 00:56:14.834091 systemd[1]: sshd@8-143.198.229.35:22-139.178.68.195:53606.service: Deactivated successfully. Aug 13 00:56:14.835163 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 00:56:14.836628 systemd-logind[1182]: Removed session 8. Aug 13 00:56:19.837011 systemd[1]: Started sshd@9-143.198.229.35:22-139.178.68.195:53614.service. Aug 13 00:56:19.889163 sshd[3399]: Accepted publickey for core from 139.178.68.195 port 53614 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78 Aug 13 00:56:19.891347 sshd[3399]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:56:19.900062 systemd-logind[1182]: New session 9 of user core. Aug 13 00:56:19.900358 systemd[1]: Started session-9.scope. Aug 13 00:56:20.048605 sshd[3399]: pam_unix(sshd:session): session closed for user core Aug 13 00:56:20.052471 systemd-logind[1182]: Session 9 logged out. Waiting for processes to exit. Aug 13 00:56:20.052758 systemd[1]: sshd@9-143.198.229.35:22-139.178.68.195:53614.service: Deactivated successfully. Aug 13 00:56:20.053571 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 00:56:20.055166 systemd-logind[1182]: Removed session 9. 
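The sshd@7 service above is an internet scanner probing the droplet: an invalid user "config" from 116.113.254.26 fails password auth, gets postponed to keyboard-interactive, fails that too, and disconnects preauth (the same dance repeats later from 61.233.4.50 as user "alain"). A fail2ban-style tally of such attempts per source address, sketched over journal text on stdin (a heuristic sketch, not something this host runs):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    // Counts "Invalid user" and "Failed ... for invalid user" sshd events
    // per source IPv4 address.
    func main() {
        re := regexp.MustCompile(
            `(?:Invalid user \S+ from|Failed \S+ for invalid user \S+ from) (\d+\.\d+\.\d+\.\d+)`)
        counts := map[string]int{}
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines run long
        for sc.Scan() {
            if m := re.FindStringSubmatch(sc.Text()); m != nil {
                counts[m[1]]++
            }
        }
        for addr, n := range counts {
            fmt.Printf("%-18s %d failed attempts\n", addr, n)
        }
    }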
Aug 13 00:56:20.838343 kubelet[1938]: E0813 00:56:20.838283 1938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:56:25.062226 systemd[1]: Started sshd@10-143.198.229.35:22-139.178.68.195:43262.service. Aug 13 00:56:25.119057 sshd[3412]: Accepted publickey for core from 139.178.68.195 port 43262 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78 Aug 13 00:56:25.121737 sshd[3412]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:56:25.129061 systemd-logind[1182]: New session 10 of user core. Aug 13 00:56:25.129747 systemd[1]: Started session-10.scope. Aug 13 00:56:25.302543 sshd[3412]: pam_unix(sshd:session): session closed for user core Aug 13 00:56:25.306476 systemd[1]: sshd@10-143.198.229.35:22-139.178.68.195:43262.service: Deactivated successfully. Aug 13 00:56:25.307630 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 00:56:25.308716 systemd-logind[1182]: Session 10 logged out. Waiting for processes to exit. Aug 13 00:56:25.310530 systemd-logind[1182]: Removed session 10. Aug 13 00:56:30.309453 systemd[1]: Started sshd@11-143.198.229.35:22-139.178.68.195:40696.service. Aug 13 00:56:30.368784 sshd[3425]: Accepted publickey for core from 139.178.68.195 port 40696 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78 Aug 13 00:56:30.373212 sshd[3425]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:56:30.386480 systemd[1]: Started session-11.scope. Aug 13 00:56:30.387307 systemd-logind[1182]: New session 11 of user core. Aug 13 00:56:30.540087 sshd[3425]: pam_unix(sshd:session): session closed for user core Aug 13 00:56:30.546611 systemd[1]: Started sshd@12-143.198.229.35:22-139.178.68.195:40710.service. Aug 13 00:56:30.548766 systemd[1]: sshd@11-143.198.229.35:22-139.178.68.195:40696.service: Deactivated successfully. Aug 13 00:56:30.550462 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 00:56:30.551868 systemd-logind[1182]: Session 11 logged out. Waiting for processes to exit. Aug 13 00:56:30.553295 systemd-logind[1182]: Removed session 11. Aug 13 00:56:30.600264 sshd[3437]: Accepted publickey for core from 139.178.68.195 port 40710 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78 Aug 13 00:56:30.602998 sshd[3437]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:56:30.613298 systemd[1]: Started session-12.scope. Aug 13 00:56:30.613893 systemd-logind[1182]: New session 12 of user core. Aug 13 00:56:30.828286 sshd[3437]: pam_unix(sshd:session): session closed for user core Aug 13 00:56:30.830339 systemd[1]: Started sshd@13-143.198.229.35:22-139.178.68.195:40714.service. Aug 13 00:56:30.835585 systemd[1]: sshd@12-143.198.229.35:22-139.178.68.195:40710.service: Deactivated successfully. Aug 13 00:56:30.836464 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 00:56:30.837290 systemd-logind[1182]: Session 12 logged out. Waiting for processes to exit. Aug 13 00:56:30.837913 kubelet[1938]: E0813 00:56:30.837865 1938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:56:30.840092 systemd-logind[1182]: Removed session 12. 
Aug 13 00:56:30.893734 sshd[3446]: Accepted publickey for core from 139.178.68.195 port 40714 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78 Aug 13 00:56:30.896413 sshd[3446]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:56:30.902901 systemd-logind[1182]: New session 13 of user core. Aug 13 00:56:30.903411 systemd[1]: Started session-13.scope. Aug 13 00:56:31.059367 sshd[3446]: pam_unix(sshd:session): session closed for user core Aug 13 00:56:31.064155 systemd[1]: sshd@13-143.198.229.35:22-139.178.68.195:40714.service: Deactivated successfully. Aug 13 00:56:31.065277 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 00:56:31.067313 systemd-logind[1182]: Session 13 logged out. Waiting for processes to exit. Aug 13 00:56:31.069199 systemd-logind[1182]: Removed session 13. Aug 13 00:56:33.964974 systemd[1]: Started sshd@14-143.198.229.35:22-183.171.215.115:57726.service. Aug 13 00:56:35.838282 kubelet[1938]: E0813 00:56:35.838229 1938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:56:36.072296 systemd[1]: Started sshd@15-143.198.229.35:22-139.178.68.195:40728.service. Aug 13 00:56:36.124804 sshd[3462]: Accepted publickey for core from 139.178.68.195 port 40728 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78 Aug 13 00:56:36.128851 sshd[3462]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:56:36.136238 systemd[1]: Started session-14.scope. Aug 13 00:56:36.137147 systemd-logind[1182]: New session 14 of user core. Aug 13 00:56:36.284663 sshd[3462]: pam_unix(sshd:session): session closed for user core Aug 13 00:56:36.289085 systemd[1]: sshd@15-143.198.229.35:22-139.178.68.195:40728.service: Deactivated successfully. Aug 13 00:56:36.290181 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 00:56:36.292175 systemd-logind[1182]: Session 14 logged out. Waiting for processes to exit. Aug 13 00:56:36.294273 systemd-logind[1182]: Removed session 14. Aug 13 00:56:36.838704 kubelet[1938]: E0813 00:56:36.838635 1938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:56:37.839922 kubelet[1938]: E0813 00:56:37.839856 1938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:56:39.356577 systemd[1]: Started sshd@16-143.198.229.35:22-61.233.4.50:41534.service. Aug 13 00:56:41.294634 systemd[1]: Started sshd@17-143.198.229.35:22-139.178.68.195:39646.service. Aug 13 00:56:41.348222 sshd[3480]: Accepted publickey for core from 139.178.68.195 port 39646 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78 Aug 13 00:56:41.351249 sshd[3480]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:56:41.358258 systemd-logind[1182]: New session 15 of user core. Aug 13 00:56:41.358949 systemd[1]: Started session-15.scope. Aug 13 00:56:41.521080 sshd[3480]: pam_unix(sshd:session): session closed for user core Aug 13 00:56:41.525645 systemd-logind[1182]: Session 15 logged out. Waiting for processes to exit. Aug 13 00:56:41.526108 systemd[1]: sshd@17-143.198.229.35:22-139.178.68.195:39646.service: Deactivated successfully. 
Aug 13 00:56:41.527360 systemd[1]: session-15.scope: Deactivated successfully. Aug 13 00:56:41.529704 systemd-logind[1182]: Removed session 15. Aug 13 00:56:42.516700 sshd[3477]: Invalid user alain from 61.233.4.50 port 41534 Aug 13 00:56:42.521109 sshd[3477]: pam_faillock(sshd:auth): User unknown Aug 13 00:56:42.521876 sshd[3477]: pam_unix(sshd:auth): check pass; user unknown Aug 13 00:56:42.521940 sshd[3477]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.233.4.50 Aug 13 00:56:42.522680 sshd[3477]: pam_faillock(sshd:auth): User unknown Aug 13 00:56:44.240217 sshd[3477]: Failed password for invalid user alain from 61.233.4.50 port 41534 ssh2 Aug 13 00:56:44.898537 sshd[3491]: pam_faillock(sshd:auth): User unknown Aug 13 00:56:44.904199 sshd[3477]: Postponed keyboard-interactive for invalid user alain from 61.233.4.50 port 41534 ssh2 [preauth] Aug 13 00:56:45.525348 sshd[3491]: pam_unix(sshd:auth): check pass; user unknown Aug 13 00:56:45.526647 sshd[3491]: pam_faillock(sshd:auth): User unknown Aug 13 00:56:46.533273 systemd[1]: Started sshd@18-143.198.229.35:22-139.178.68.195:39662.service. Aug 13 00:56:46.592325 sshd[3493]: Accepted publickey for core from 139.178.68.195 port 39662 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78 Aug 13 00:56:46.595349 sshd[3493]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:56:46.602189 systemd[1]: Started session-16.scope. Aug 13 00:56:46.602993 systemd-logind[1182]: New session 16 of user core. Aug 13 00:56:46.767280 systemd[1]: Started sshd@19-143.198.229.35:22-139.178.68.195:39672.service. Aug 13 00:56:46.794107 sshd[3493]: pam_unix(sshd:session): session closed for user core Aug 13 00:56:46.797836 systemd[1]: sshd@18-143.198.229.35:22-139.178.68.195:39662.service: Deactivated successfully. Aug 13 00:56:46.798900 systemd[1]: session-16.scope: Deactivated successfully. Aug 13 00:56:46.799856 systemd-logind[1182]: Session 16 logged out. Waiting for processes to exit. Aug 13 00:56:46.801286 systemd-logind[1182]: Removed session 16. Aug 13 00:56:46.817793 sshd[3504]: Accepted publickey for core from 139.178.68.195 port 39672 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78 Aug 13 00:56:46.820023 sshd[3504]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:56:46.827060 systemd-logind[1182]: New session 17 of user core. Aug 13 00:56:46.828627 systemd[1]: Started session-17.scope. Aug 13 00:56:47.183866 sshd[3477]: PAM: Permission denied for illegal user alain from 61.233.4.50 Aug 13 00:56:47.184765 sshd[3477]: Failed keyboard-interactive/pam for invalid user alain from 61.233.4.50 port 41534 ssh2 Aug 13 00:56:47.304527 sshd[3504]: pam_unix(sshd:session): session closed for user core Aug 13 00:56:47.310141 systemd[1]: sshd@19-143.198.229.35:22-139.178.68.195:39672.service: Deactivated successfully. Aug 13 00:56:47.310862 systemd[1]: session-17.scope: Deactivated successfully. Aug 13 00:56:47.313104 systemd-logind[1182]: Session 17 logged out. Waiting for processes to exit. Aug 13 00:56:47.313734 systemd[1]: Started sshd@20-143.198.229.35:22-139.178.68.195:39676.service. Aug 13 00:56:47.315920 systemd-logind[1182]: Removed session 17. 
Aug 13 00:56:47.378136 sshd[3515]: Accepted publickey for core from 139.178.68.195 port 39676 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78 Aug 13 00:56:47.381150 sshd[3515]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:56:47.388019 systemd-logind[1182]: New session 18 of user core. Aug 13 00:56:47.388904 systemd[1]: Started session-18.scope. Aug 13 00:56:47.791668 sshd[3477]: Connection closed by invalid user alain 61.233.4.50 port 41534 [preauth] Aug 13 00:56:47.793857 systemd[1]: sshd@16-143.198.229.35:22-61.233.4.50:41534.service: Deactivated successfully. Aug 13 00:56:47.839489 kubelet[1938]: E0813 00:56:47.839438 1938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:56:48.240158 sshd[3515]: pam_unix(sshd:session): session closed for user core Aug 13 00:56:48.246880 systemd[1]: sshd@20-143.198.229.35:22-139.178.68.195:39676.service: Deactivated successfully. Aug 13 00:56:48.248000 systemd[1]: session-18.scope: Deactivated successfully. Aug 13 00:56:48.248756 systemd-logind[1182]: Session 18 logged out. Waiting for processes to exit. Aug 13 00:56:48.255079 systemd[1]: Started sshd@21-143.198.229.35:22-139.178.68.195:39680.service. Aug 13 00:56:48.257631 systemd-logind[1182]: Removed session 18. Aug 13 00:56:48.305093 sshd[3532]: Accepted publickey for core from 139.178.68.195 port 39680 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78 Aug 13 00:56:48.307759 sshd[3532]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:56:48.315231 systemd[1]: Started session-19.scope. Aug 13 00:56:48.315690 systemd-logind[1182]: New session 19 of user core. Aug 13 00:56:48.722874 sshd[3532]: pam_unix(sshd:session): session closed for user core Aug 13 00:56:48.729713 systemd[1]: sshd@21-143.198.229.35:22-139.178.68.195:39680.service: Deactivated successfully. Aug 13 00:56:48.730862 systemd[1]: session-19.scope: Deactivated successfully. Aug 13 00:56:48.731833 systemd-logind[1182]: Session 19 logged out. Waiting for processes to exit. Aug 13 00:56:48.735000 systemd[1]: Started sshd@22-143.198.229.35:22-139.178.68.195:39686.service. Aug 13 00:56:48.738125 systemd-logind[1182]: Removed session 19. Aug 13 00:56:48.792014 sshd[3542]: Accepted publickey for core from 139.178.68.195 port 39686 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78 Aug 13 00:56:48.794180 sshd[3542]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:56:48.801689 systemd[1]: Started session-20.scope. Aug 13 00:56:48.802805 systemd-logind[1182]: New session 20 of user core. Aug 13 00:56:48.964372 sshd[3542]: pam_unix(sshd:session): session closed for user core Aug 13 00:56:48.968189 systemd-logind[1182]: Session 20 logged out. Waiting for processes to exit. Aug 13 00:56:48.969124 systemd[1]: sshd@22-143.198.229.35:22-139.178.68.195:39686.service: Deactivated successfully. Aug 13 00:56:48.970098 systemd[1]: session-20.scope: Deactivated successfully. Aug 13 00:56:48.971215 systemd-logind[1182]: Removed session 20. Aug 13 00:56:53.973518 systemd[1]: Started sshd@23-143.198.229.35:22-139.178.68.195:45482.service. 
Aug 13 00:56:54.036667 sshd[3554]: Accepted publickey for core from 139.178.68.195 port 45482 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78 Aug 13 00:56:54.038866 sshd[3554]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:56:54.048983 systemd[1]: Started session-21.scope. Aug 13 00:56:54.050090 systemd-logind[1182]: New session 21 of user core. Aug 13 00:56:54.223497 sshd[3554]: pam_unix(sshd:session): session closed for user core Aug 13 00:56:54.227272 systemd-logind[1182]: Session 21 logged out. Waiting for processes to exit. Aug 13 00:56:54.228826 systemd[1]: sshd@23-143.198.229.35:22-139.178.68.195:45482.service: Deactivated successfully. Aug 13 00:56:54.229925 systemd[1]: session-21.scope: Deactivated successfully. Aug 13 00:56:54.230727 systemd-logind[1182]: Removed session 21. Aug 13 00:56:59.232537 systemd[1]: Started sshd@24-143.198.229.35:22-139.178.68.195:45498.service. Aug 13 00:56:59.283535 sshd[3568]: Accepted publickey for core from 139.178.68.195 port 45498 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78 Aug 13 00:56:59.285733 sshd[3568]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:56:59.293123 systemd[1]: Started session-22.scope. Aug 13 00:56:59.293145 systemd-logind[1182]: New session 22 of user core. Aug 13 00:56:59.448252 sshd[3568]: pam_unix(sshd:session): session closed for user core Aug 13 00:56:59.452136 systemd[1]: sshd@24-143.198.229.35:22-139.178.68.195:45498.service: Deactivated successfully. Aug 13 00:56:59.453023 systemd[1]: session-22.scope: Deactivated successfully. Aug 13 00:56:59.454302 systemd-logind[1182]: Session 22 logged out. Waiting for processes to exit. Aug 13 00:56:59.455858 systemd-logind[1182]: Removed session 22. Aug 13 00:57:03.838389 kubelet[1938]: E0813 00:57:03.838336 1938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:57:04.456233 systemd[1]: Started sshd@25-143.198.229.35:22-139.178.68.195:57928.service. Aug 13 00:57:04.508480 sshd[3582]: Accepted publickey for core from 139.178.68.195 port 57928 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78 Aug 13 00:57:04.511018 sshd[3582]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:57:04.519571 systemd-logind[1182]: New session 23 of user core. Aug 13 00:57:04.519700 systemd[1]: Started session-23.scope. Aug 13 00:57:04.669189 sshd[3582]: pam_unix(sshd:session): session closed for user core Aug 13 00:57:04.674496 systemd[1]: sshd@25-143.198.229.35:22-139.178.68.195:57928.service: Deactivated successfully. Aug 13 00:57:04.675420 systemd[1]: session-23.scope: Deactivated successfully. Aug 13 00:57:04.676572 systemd-logind[1182]: Session 23 logged out. Waiting for processes to exit. Aug 13 00:57:04.678818 systemd-logind[1182]: Removed session 23. Aug 13 00:57:09.678133 systemd[1]: Started sshd@26-143.198.229.35:22-139.178.68.195:57932.service. Aug 13 00:57:09.736021 sshd[3596]: Accepted publickey for core from 139.178.68.195 port 57932 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78 Aug 13 00:57:09.735406 sshd[3596]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:57:09.743326 systemd-logind[1182]: New session 24 of user core. Aug 13 00:57:09.744380 systemd[1]: Started session-24.scope. 
Aug 13 00:57:09.902713 sshd[3596]: pam_unix(sshd:session): session closed for user core Aug 13 00:57:09.907408 systemd[1]: sshd@26-143.198.229.35:22-139.178.68.195:57932.service: Deactivated successfully. Aug 13 00:57:09.908489 systemd[1]: session-24.scope: Deactivated successfully. Aug 13 00:57:09.910019 systemd-logind[1182]: Session 24 logged out. Waiting for processes to exit. Aug 13 00:57:09.911328 systemd-logind[1182]: Removed session 24. Aug 13 00:57:14.912941 systemd[1]: Started sshd@27-143.198.229.35:22-139.178.68.195:49436.service. Aug 13 00:57:14.970756 sshd[3608]: Accepted publickey for core from 139.178.68.195 port 49436 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78 Aug 13 00:57:14.973282 sshd[3608]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:57:14.980983 systemd-logind[1182]: New session 25 of user core. Aug 13 00:57:14.981405 systemd[1]: Started session-25.scope. Aug 13 00:57:15.160158 sshd[3608]: pam_unix(sshd:session): session closed for user core Aug 13 00:57:15.163459 systemd-logind[1182]: Session 25 logged out. Waiting for processes to exit. Aug 13 00:57:15.163936 systemd[1]: sshd@27-143.198.229.35:22-139.178.68.195:49436.service: Deactivated successfully. Aug 13 00:57:15.165046 systemd[1]: session-25.scope: Deactivated successfully. Aug 13 00:57:15.166157 systemd-logind[1182]: Removed session 25. Aug 13 00:57:20.169937 systemd[1]: Started sshd@28-143.198.229.35:22-139.178.68.195:36552.service. Aug 13 00:57:20.234140 sshd[3620]: Accepted publickey for core from 139.178.68.195 port 36552 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78 Aug 13 00:57:20.236344 sshd[3620]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:57:20.243667 systemd[1]: Started session-26.scope. Aug 13 00:57:20.245179 systemd-logind[1182]: New session 26 of user core. Aug 13 00:57:20.414321 sshd[3620]: pam_unix(sshd:session): session closed for user core Aug 13 00:57:20.422765 systemd[1]: Started sshd@29-143.198.229.35:22-139.178.68.195:36562.service. Aug 13 00:57:20.425048 systemd[1]: sshd@28-143.198.229.35:22-139.178.68.195:36552.service: Deactivated successfully. Aug 13 00:57:20.427163 systemd[1]: session-26.scope: Deactivated successfully. Aug 13 00:57:20.429515 systemd-logind[1182]: Session 26 logged out. Waiting for processes to exit. Aug 13 00:57:20.431454 systemd-logind[1182]: Removed session 26. Aug 13 00:57:20.481915 sshd[3633]: Accepted publickey for core from 139.178.68.195 port 36562 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78 Aug 13 00:57:20.482801 sshd[3633]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:57:20.495511 systemd[1]: Started session-27.scope. Aug 13 00:57:20.496018 systemd-logind[1182]: New session 27 of user core. 
Aug 13 00:57:22.777522 env[1190]: time="2025-08-13T00:57:22.776564766Z" level=info msg="StopContainer for \"79604100dcd8600ba3cc699e2d775713ca4034678d226d49b57eee971602fd3c\" with timeout 30 (s)" Aug 13 00:57:22.781350 env[1190]: time="2025-08-13T00:57:22.781276775Z" level=info msg="Stop container \"79604100dcd8600ba3cc699e2d775713ca4034678d226d49b57eee971602fd3c\" with signal terminated" Aug 13 00:57:22.797407 env[1190]: time="2025-08-13T00:57:22.797325689Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 00:57:22.805581 env[1190]: time="2025-08-13T00:57:22.805535171Z" level=info msg="StopContainer for \"32da30aafb93814b548a81725a124e2939cb1b6c1cee6a83d934b4685c2543a3\" with timeout 2 (s)" Aug 13 00:57:22.806158 env[1190]: time="2025-08-13T00:57:22.806128522Z" level=info msg="Stop container \"32da30aafb93814b548a81725a124e2939cb1b6c1cee6a83d934b4685c2543a3\" with signal terminated" Aug 13 00:57:22.810052 systemd[1]: cri-containerd-79604100dcd8600ba3cc699e2d775713ca4034678d226d49b57eee971602fd3c.scope: Deactivated successfully. Aug 13 00:57:22.817715 systemd-networkd[1008]: lxc_health: Link DOWN Aug 13 00:57:22.817724 systemd-networkd[1008]: lxc_health: Lost carrier Aug 13 00:57:22.857517 systemd[1]: cri-containerd-32da30aafb93814b548a81725a124e2939cb1b6c1cee6a83d934b4685c2543a3.scope: Deactivated successfully. Aug 13 00:57:22.857819 systemd[1]: cri-containerd-32da30aafb93814b548a81725a124e2939cb1b6c1cee6a83d934b4685c2543a3.scope: Consumed 9.680s CPU time. Aug 13 00:57:22.864362 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-79604100dcd8600ba3cc699e2d775713ca4034678d226d49b57eee971602fd3c-rootfs.mount: Deactivated successfully. Aug 13 00:57:22.884700 env[1190]: time="2025-08-13T00:57:22.884628139Z" level=info msg="shim disconnected" id=79604100dcd8600ba3cc699e2d775713ca4034678d226d49b57eee971602fd3c Aug 13 00:57:22.885030 env[1190]: time="2025-08-13T00:57:22.884783720Z" level=warning msg="cleaning up after shim disconnected" id=79604100dcd8600ba3cc699e2d775713ca4034678d226d49b57eee971602fd3c namespace=k8s.io Aug 13 00:57:22.885030 env[1190]: time="2025-08-13T00:57:22.884800144Z" level=info msg="cleaning up dead shim" Aug 13 00:57:22.902924 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-32da30aafb93814b548a81725a124e2939cb1b6c1cee6a83d934b4685c2543a3-rootfs.mount: Deactivated successfully. 
Aug 13 00:57:22.909317 env[1190]: time="2025-08-13T00:57:22.909259961Z" level=info msg="shim disconnected" id=32da30aafb93814b548a81725a124e2939cb1b6c1cee6a83d934b4685c2543a3 Aug 13 00:57:22.909317 env[1190]: time="2025-08-13T00:57:22.909309264Z" level=warning msg="cleaning up after shim disconnected" id=32da30aafb93814b548a81725a124e2939cb1b6c1cee6a83d934b4685c2543a3 namespace=k8s.io Aug 13 00:57:22.909317 env[1190]: time="2025-08-13T00:57:22.909320267Z" level=info msg="cleaning up dead shim" Aug 13 00:57:22.910989 env[1190]: time="2025-08-13T00:57:22.910898660Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:57:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3695 runtime=io.containerd.runc.v2\n" Aug 13 00:57:22.914169 env[1190]: time="2025-08-13T00:57:22.914115444Z" level=info msg="StopContainer for \"79604100dcd8600ba3cc699e2d775713ca4034678d226d49b57eee971602fd3c\" returns successfully" Aug 13 00:57:22.917353 env[1190]: time="2025-08-13T00:57:22.917305883Z" level=info msg="StopPodSandbox for \"15db8255f2ede4a1f1da1fe013f0e37f90fa4bf91f27f78a00f3994d5c4293b9\"" Aug 13 00:57:22.917614 env[1190]: time="2025-08-13T00:57:22.917589189Z" level=info msg="Container to stop \"79604100dcd8600ba3cc699e2d775713ca4034678d226d49b57eee971602fd3c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:57:22.920087 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-15db8255f2ede4a1f1da1fe013f0e37f90fa4bf91f27f78a00f3994d5c4293b9-shm.mount: Deactivated successfully. Aug 13 00:57:22.929692 env[1190]: time="2025-08-13T00:57:22.929628612Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:57:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3714 runtime=io.containerd.runc.v2\n" Aug 13 00:57:22.932610 env[1190]: time="2025-08-13T00:57:22.932543725Z" level=info msg="StopContainer for \"32da30aafb93814b548a81725a124e2939cb1b6c1cee6a83d934b4685c2543a3\" returns successfully" Aug 13 00:57:22.933396 env[1190]: time="2025-08-13T00:57:22.933361661Z" level=info msg="StopPodSandbox for \"f4d75240f6abcfab4428336c2c50704d2c88d22906a70cebd06309fd415a7853\"" Aug 13 00:57:22.933686 env[1190]: time="2025-08-13T00:57:22.933644838Z" level=info msg="Container to stop \"6e6efd4e32dd97e862e206467234a8208ec5447637835a8f8535e3d8061cf39d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 00:57:22.934389 systemd[1]: cri-containerd-15db8255f2ede4a1f1da1fe013f0e37f90fa4bf91f27f78a00f3994d5c4293b9.scope: Deactivated successfully. 
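The teardown above shows CRI stop semantics: StopContainer carries a grace period in seconds (30 for the operator container, 2 for the cilium agent) before escalation to SIGKILL, and the subsequent StopPodSandbox emits the "must be in running or unknown state" note for each container, which is informational for containers that have already exited. (The earlier "failed to reload cni configuration" error is likewise the expected side effect of Cilium removing /etc/cni/net.d/05-cilium.conf on shutdown.) A sketch of the same stop call against containerd's CRI socket, using the published CRI API (container ID copied from the log; client wiring assumed):

    package main

    import (
        "context"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)

        ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
        defer cancel()

        // Timeout is the grace period in seconds: SIGTERM first, then
        // SIGKILL once it expires -- 30s here, matching the operator stop.
        _, err = rt.StopContainer(ctx, &runtimeapi.StopContainerRequest{
            ContainerId: "79604100dcd8600ba3cc699e2d775713ca4034678d226d49b57eee971602fd3c",
            Timeout:     30,
        })
        if err != nil {
            log.Fatal(err)
        }
    }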
Aug 13 00:57:22.937119 env[1190]: time="2025-08-13T00:57:22.936931917Z" level=info msg="Container to stop \"9378e691e15a291ed1379c94e79cfc460303cc1efb29a4dd621fa453881515c7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:57:22.937427 env[1190]: time="2025-08-13T00:57:22.937396560Z" level=info msg="Container to stop \"b69ac6e02f455790f51c517838cb0d9cb905dfb4bceeb1fb05839f95aacd1550\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:57:22.937701 env[1190]: time="2025-08-13T00:57:22.937667763Z" level=info msg="Container to stop \"ffa32764481f9393d075ad6b6134172f3ecec164ee1e6e1287decfd55fba67d1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:57:22.937843 env[1190]: time="2025-08-13T00:57:22.937815168Z" level=info msg="Container to stop \"32da30aafb93814b548a81725a124e2939cb1b6c1cee6a83d934b4685c2543a3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:57:22.940610 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f4d75240f6abcfab4428336c2c50704d2c88d22906a70cebd06309fd415a7853-shm.mount: Deactivated successfully.
Aug 13 00:57:22.956532 systemd[1]: cri-containerd-f4d75240f6abcfab4428336c2c50704d2c88d22906a70cebd06309fd415a7853.scope: Deactivated successfully.
Aug 13 00:57:22.992469 env[1190]: time="2025-08-13T00:57:22.992395278Z" level=info msg="shim disconnected" id=15db8255f2ede4a1f1da1fe013f0e37f90fa4bf91f27f78a00f3994d5c4293b9
Aug 13 00:57:22.992469 env[1190]: time="2025-08-13T00:57:22.992458337Z" level=warning msg="cleaning up after shim disconnected" id=15db8255f2ede4a1f1da1fe013f0e37f90fa4bf91f27f78a00f3994d5c4293b9 namespace=k8s.io
Aug 13 00:57:22.992469 env[1190]: time="2025-08-13T00:57:22.992478359Z" level=info msg="cleaning up dead shim"
Aug 13 00:57:23.005778 env[1190]: time="2025-08-13T00:57:23.005718385Z" level=info msg="shim disconnected" id=f4d75240f6abcfab4428336c2c50704d2c88d22906a70cebd06309fd415a7853
Aug 13 00:57:23.006158 env[1190]: time="2025-08-13T00:57:23.006125264Z" level=warning msg="cleaning up after shim disconnected" id=f4d75240f6abcfab4428336c2c50704d2c88d22906a70cebd06309fd415a7853 namespace=k8s.io
Aug 13 00:57:23.006633 env[1190]: time="2025-08-13T00:57:23.006606241Z" level=info msg="cleaning up dead shim"
Aug 13 00:57:23.010557 env[1190]: time="2025-08-13T00:57:23.010497069Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:57:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3762 runtime=io.containerd.runc.v2\n"
Aug 13 00:57:23.011008 env[1190]: time="2025-08-13T00:57:23.010909260Z" level=info msg="TearDown network for sandbox \"15db8255f2ede4a1f1da1fe013f0e37f90fa4bf91f27f78a00f3994d5c4293b9\" successfully"
Aug 13 00:57:23.011101 env[1190]: time="2025-08-13T00:57:23.010954068Z" level=info msg="StopPodSandbox for \"15db8255f2ede4a1f1da1fe013f0e37f90fa4bf91f27f78a00f3994d5c4293b9\" returns successfully"
Aug 13 00:57:23.040220 env[1190]: time="2025-08-13T00:57:23.038553220Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:57:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3776 runtime=io.containerd.runc.v2\n"
Aug 13 00:57:23.040220 env[1190]: time="2025-08-13T00:57:23.039194035Z" level=info msg="TearDown network for sandbox \"f4d75240f6abcfab4428336c2c50704d2c88d22906a70cebd06309fd415a7853\" successfully"
Aug 13 00:57:23.040220 env[1190]: time="2025-08-13T00:57:23.039224560Z" level=info msg="StopPodSandbox for \"f4d75240f6abcfab4428336c2c50704d2c88d22906a70cebd06309fd415a7853\" returns successfully"
Aug 13 00:57:23.090558 kubelet[1938]: I0813 00:57:23.090493 1938 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/91ab3916-4482-404b-b1a5-bd4bb11efae4-cilium-config-path\") pod \"91ab3916-4482-404b-b1a5-bd4bb11efae4\" (UID: \"91ab3916-4482-404b-b1a5-bd4bb11efae4\") "
Aug 13 00:57:23.091154 kubelet[1938]: I0813 00:57:23.090769 1938 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/91ab3916-4482-404b-b1a5-bd4bb11efae4-bpf-maps\") pod \"91ab3916-4482-404b-b1a5-bd4bb11efae4\" (UID: \"91ab3916-4482-404b-b1a5-bd4bb11efae4\") "
Aug 13 00:57:23.104472 kubelet[1938]: I0813 00:57:23.104380 1938 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqv9x\" (UniqueName: \"kubernetes.io/projected/8a43b934-62a6-4148-be81-43e0131241f4-kube-api-access-nqv9x\") pod \"8a43b934-62a6-4148-be81-43e0131241f4\" (UID: \"8a43b934-62a6-4148-be81-43e0131241f4\") "
Aug 13 00:57:23.104472 kubelet[1938]: I0813 00:57:23.104466 1938 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8a43b934-62a6-4148-be81-43e0131241f4-cilium-config-path\") pod \"8a43b934-62a6-4148-be81-43e0131241f4\" (UID: \"8a43b934-62a6-4148-be81-43e0131241f4\") "
Aug 13 00:57:23.104774 kubelet[1938]: I0813 00:57:23.104507 1938 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/91ab3916-4482-404b-b1a5-bd4bb11efae4-cilium-cgroup\") pod \"91ab3916-4482-404b-b1a5-bd4bb11efae4\" (UID: \"91ab3916-4482-404b-b1a5-bd4bb11efae4\") "
Aug 13 00:57:23.104774 kubelet[1938]: I0813 00:57:23.104538 1938 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/91ab3916-4482-404b-b1a5-bd4bb11efae4-hubble-tls\") pod \"91ab3916-4482-404b-b1a5-bd4bb11efae4\" (UID: \"91ab3916-4482-404b-b1a5-bd4bb11efae4\") "
Aug 13 00:57:23.104774 kubelet[1938]: I0813 00:57:23.104572 1938 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/91ab3916-4482-404b-b1a5-bd4bb11efae4-etc-cni-netd\") pod \"91ab3916-4482-404b-b1a5-bd4bb11efae4\" (UID: \"91ab3916-4482-404b-b1a5-bd4bb11efae4\") "
Aug 13 00:57:23.104774 kubelet[1938]: I0813 00:57:23.104607 1938 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/91ab3916-4482-404b-b1a5-bd4bb11efae4-clustermesh-secrets\") pod \"91ab3916-4482-404b-b1a5-bd4bb11efae4\" (UID: \"91ab3916-4482-404b-b1a5-bd4bb11efae4\") "
Aug 13 00:57:23.104774 kubelet[1938]: I0813 00:57:23.104672 1938 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/91ab3916-4482-404b-b1a5-bd4bb11efae4-cilium-run\") pod \"91ab3916-4482-404b-b1a5-bd4bb11efae4\" (UID: \"91ab3916-4482-404b-b1a5-bd4bb11efae4\") "
Aug 13 00:57:23.104774 kubelet[1938]: I0813 00:57:23.104730 1938 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/91ab3916-4482-404b-b1a5-bd4bb11efae4-host-proc-sys-kernel\") pod \"91ab3916-4482-404b-b1a5-bd4bb11efae4\" (UID: \"91ab3916-4482-404b-b1a5-bd4bb11efae4\") "
Aug 13 00:57:23.104953 kubelet[1938]: I0813 00:57:23.104754 1938 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/91ab3916-4482-404b-b1a5-bd4bb11efae4-lib-modules\") pod \"91ab3916-4482-404b-b1a5-bd4bb11efae4\" (UID: \"91ab3916-4482-404b-b1a5-bd4bb11efae4\") "
Aug 13 00:57:23.104953 kubelet[1938]: I0813 00:57:23.104785 1938 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/91ab3916-4482-404b-b1a5-bd4bb11efae4-xtables-lock\") pod \"91ab3916-4482-404b-b1a5-bd4bb11efae4\" (UID: \"91ab3916-4482-404b-b1a5-bd4bb11efae4\") "
Aug 13 00:57:23.104953 kubelet[1938]: I0813 00:57:23.104825 1938 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/91ab3916-4482-404b-b1a5-bd4bb11efae4-hostproc\") pod \"91ab3916-4482-404b-b1a5-bd4bb11efae4\" (UID: \"91ab3916-4482-404b-b1a5-bd4bb11efae4\") "
Aug 13 00:57:23.104953 kubelet[1938]: I0813 00:57:23.104857 1938 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8p5hb\" (UniqueName: \"kubernetes.io/projected/91ab3916-4482-404b-b1a5-bd4bb11efae4-kube-api-access-8p5hb\") pod \"91ab3916-4482-404b-b1a5-bd4bb11efae4\" (UID: \"91ab3916-4482-404b-b1a5-bd4bb11efae4\") "
Aug 13 00:57:23.104953 kubelet[1938]: I0813 00:57:23.104887 1938 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/91ab3916-4482-404b-b1a5-bd4bb11efae4-host-proc-sys-net\") pod \"91ab3916-4482-404b-b1a5-bd4bb11efae4\" (UID: \"91ab3916-4482-404b-b1a5-bd4bb11efae4\") "
Aug 13 00:57:23.104953 kubelet[1938]: I0813 00:57:23.104918 1938 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/91ab3916-4482-404b-b1a5-bd4bb11efae4-cni-path\") pod \"91ab3916-4482-404b-b1a5-bd4bb11efae4\" (UID: \"91ab3916-4482-404b-b1a5-bd4bb11efae4\") "
Aug 13 00:57:23.114666 kubelet[1938]: I0813 00:57:23.114550 1938 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91ab3916-4482-404b-b1a5-bd4bb11efae4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "91ab3916-4482-404b-b1a5-bd4bb11efae4" (UID: "91ab3916-4482-404b-b1a5-bd4bb11efae4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Aug 13 00:57:23.118671 kubelet[1938]: I0813 00:57:23.118570 1938 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91ab3916-4482-404b-b1a5-bd4bb11efae4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "91ab3916-4482-404b-b1a5-bd4bb11efae4" (UID: "91ab3916-4482-404b-b1a5-bd4bb11efae4"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:57:23.123255 kubelet[1938]: I0813 00:57:23.123141 1938 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91ab3916-4482-404b-b1a5-bd4bb11efae4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "91ab3916-4482-404b-b1a5-bd4bb11efae4" (UID: "91ab3916-4482-404b-b1a5-bd4bb11efae4"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:57:23.130044 kubelet[1938]: I0813 00:57:23.129950 1938 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91ab3916-4482-404b-b1a5-bd4bb11efae4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "91ab3916-4482-404b-b1a5-bd4bb11efae4" (UID: "91ab3916-4482-404b-b1a5-bd4bb11efae4"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:57:23.137003 kubelet[1938]: I0813 00:57:23.133089 1938 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a43b934-62a6-4148-be81-43e0131241f4-kube-api-access-nqv9x" (OuterVolumeSpecName: "kube-api-access-nqv9x") pod "8a43b934-62a6-4148-be81-43e0131241f4" (UID: "8a43b934-62a6-4148-be81-43e0131241f4"). InnerVolumeSpecName "kube-api-access-nqv9x". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Aug 13 00:57:23.137003 kubelet[1938]: I0813 00:57:23.134032 1938 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91ab3916-4482-404b-b1a5-bd4bb11efae4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "91ab3916-4482-404b-b1a5-bd4bb11efae4" (UID: "91ab3916-4482-404b-b1a5-bd4bb11efae4"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:57:23.137003 kubelet[1938]: I0813 00:57:23.134115 1938 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91ab3916-4482-404b-b1a5-bd4bb11efae4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "91ab3916-4482-404b-b1a5-bd4bb11efae4" (UID: "91ab3916-4482-404b-b1a5-bd4bb11efae4"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:57:23.137003 kubelet[1938]: I0813 00:57:23.134135 1938 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91ab3916-4482-404b-b1a5-bd4bb11efae4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "91ab3916-4482-404b-b1a5-bd4bb11efae4" (UID: "91ab3916-4482-404b-b1a5-bd4bb11efae4"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:57:23.137003 kubelet[1938]: I0813 00:57:23.134167 1938 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91ab3916-4482-404b-b1a5-bd4bb11efae4-hostproc" (OuterVolumeSpecName: "hostproc") pod "91ab3916-4482-404b-b1a5-bd4bb11efae4" (UID: "91ab3916-4482-404b-b1a5-bd4bb11efae4"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:57:23.137480 kubelet[1938]: I0813 00:57:23.107272 1938 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91ab3916-4482-404b-b1a5-bd4bb11efae4-cni-path" (OuterVolumeSpecName: "cni-path") pod "91ab3916-4482-404b-b1a5-bd4bb11efae4" (UID: "91ab3916-4482-404b-b1a5-bd4bb11efae4"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:57:23.137767 kubelet[1938]: I0813 00:57:23.137726 1938 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91ab3916-4482-404b-b1a5-bd4bb11efae4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "91ab3916-4482-404b-b1a5-bd4bb11efae4" (UID: "91ab3916-4482-404b-b1a5-bd4bb11efae4"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:57:23.137849 kubelet[1938]: I0813 00:57:23.137833 1938 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91ab3916-4482-404b-b1a5-bd4bb11efae4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "91ab3916-4482-404b-b1a5-bd4bb11efae4" (UID: "91ab3916-4482-404b-b1a5-bd4bb11efae4"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:57:23.140348 kubelet[1938]: I0813 00:57:23.139132 1938 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a43b934-62a6-4148-be81-43e0131241f4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8a43b934-62a6-4148-be81-43e0131241f4" (UID: "8a43b934-62a6-4148-be81-43e0131241f4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Aug 13 00:57:23.140646 kubelet[1938]: I0813 00:57:23.140614 1938 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91ab3916-4482-404b-b1a5-bd4bb11efae4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "91ab3916-4482-404b-b1a5-bd4bb11efae4" (UID: "91ab3916-4482-404b-b1a5-bd4bb11efae4"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Aug 13 00:57:23.142168 kubelet[1938]: I0813 00:57:23.142131 1938 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91ab3916-4482-404b-b1a5-bd4bb11efae4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "91ab3916-4482-404b-b1a5-bd4bb11efae4" (UID: "91ab3916-4482-404b-b1a5-bd4bb11efae4"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Aug 13 00:57:23.146736 kubelet[1938]: I0813 00:57:23.146656 1938 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91ab3916-4482-404b-b1a5-bd4bb11efae4-kube-api-access-8p5hb" (OuterVolumeSpecName: "kube-api-access-8p5hb") pod "91ab3916-4482-404b-b1a5-bd4bb11efae4" (UID: "91ab3916-4482-404b-b1a5-bd4bb11efae4"). InnerVolumeSpecName "kube-api-access-8p5hb". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Aug 13 00:57:23.208495 kubelet[1938]: I0813 00:57:23.208386 1938 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/91ab3916-4482-404b-b1a5-bd4bb11efae4-bpf-maps\") on node \"ci-3510.3.8-f-585a890caa\" DevicePath \"\""
Aug 13 00:57:23.208495 kubelet[1938]: I0813 00:57:23.208446 1938 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nqv9x\" (UniqueName: \"kubernetes.io/projected/8a43b934-62a6-4148-be81-43e0131241f4-kube-api-access-nqv9x\") on node \"ci-3510.3.8-f-585a890caa\" DevicePath \"\""
Aug 13 00:57:23.208495 kubelet[1938]: I0813 00:57:23.208461 1938 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8a43b934-62a6-4148-be81-43e0131241f4-cilium-config-path\") on node \"ci-3510.3.8-f-585a890caa\" DevicePath \"\""
Aug 13 00:57:23.208495 kubelet[1938]: I0813 00:57:23.208471 1938 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/91ab3916-4482-404b-b1a5-bd4bb11efae4-cilium-cgroup\") on node \"ci-3510.3.8-f-585a890caa\" DevicePath \"\""
Aug 13 00:57:23.208495 kubelet[1938]: I0813 00:57:23.208482 1938 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/91ab3916-4482-404b-b1a5-bd4bb11efae4-hubble-tls\") on node \"ci-3510.3.8-f-585a890caa\" DevicePath \"\""
Aug 13 00:57:23.208495 kubelet[1938]: I0813 00:57:23.208491 1938 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/91ab3916-4482-404b-b1a5-bd4bb11efae4-etc-cni-netd\") on node \"ci-3510.3.8-f-585a890caa\" DevicePath \"\""
Aug 13 00:57:23.208495 kubelet[1938]: I0813 00:57:23.208502 1938 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/91ab3916-4482-404b-b1a5-bd4bb11efae4-clustermesh-secrets\") on node \"ci-3510.3.8-f-585a890caa\" DevicePath \"\""
Aug 13 00:57:23.208495 kubelet[1938]: I0813 00:57:23.208512 1938 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/91ab3916-4482-404b-b1a5-bd4bb11efae4-cilium-run\") on node \"ci-3510.3.8-f-585a890caa\" DevicePath \"\""
Aug 13 00:57:23.209134 kubelet[1938]: I0813 00:57:23.208523 1938 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/91ab3916-4482-404b-b1a5-bd4bb11efae4-host-proc-sys-kernel\") on node \"ci-3510.3.8-f-585a890caa\" DevicePath \"\""
Aug 13 00:57:23.209134 kubelet[1938]: I0813 00:57:23.208538 1938 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/91ab3916-4482-404b-b1a5-bd4bb11efae4-lib-modules\") on node \"ci-3510.3.8-f-585a890caa\" DevicePath \"\""
Aug 13 00:57:23.209134 kubelet[1938]: I0813 00:57:23.208547 1938 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/91ab3916-4482-404b-b1a5-bd4bb11efae4-xtables-lock\") on node \"ci-3510.3.8-f-585a890caa\" DevicePath \"\""
Aug 13 00:57:23.209134 kubelet[1938]: I0813 00:57:23.208556 1938 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/91ab3916-4482-404b-b1a5-bd4bb11efae4-hostproc\") on node \"ci-3510.3.8-f-585a890caa\" DevicePath \"\""
Aug 13 00:57:23.209134 kubelet[1938]: I0813 00:57:23.208566 1938 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8p5hb\" (UniqueName: \"kubernetes.io/projected/91ab3916-4482-404b-b1a5-bd4bb11efae4-kube-api-access-8p5hb\") on node \"ci-3510.3.8-f-585a890caa\" DevicePath \"\""
Aug 13 00:57:23.209134 kubelet[1938]: I0813 00:57:23.208575 1938 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/91ab3916-4482-404b-b1a5-bd4bb11efae4-host-proc-sys-net\") on node \"ci-3510.3.8-f-585a890caa\" DevicePath \"\""
Aug 13 00:57:23.209134 kubelet[1938]: I0813 00:57:23.208584 1938 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/91ab3916-4482-404b-b1a5-bd4bb11efae4-cni-path\") on node \"ci-3510.3.8-f-585a890caa\" DevicePath \"\""
Aug 13 00:57:23.209134 kubelet[1938]: I0813 00:57:23.208593 1938 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/91ab3916-4482-404b-b1a5-bd4bb11efae4-cilium-config-path\") on node \"ci-3510.3.8-f-585a890caa\" DevicePath \"\""
Aug 13 00:57:23.405733 kubelet[1938]: I0813 00:57:23.402985 1938 scope.go:117] "RemoveContainer" containerID="79604100dcd8600ba3cc699e2d775713ca4034678d226d49b57eee971602fd3c"
Aug 13 00:57:23.408805 systemd[1]: Removed slice kubepods-besteffort-pod8a43b934_62a6_4148_be81_43e0131241f4.slice.
Aug 13 00:57:23.413894 env[1190]: time="2025-08-13T00:57:23.413446320Z" level=info msg="RemoveContainer for \"79604100dcd8600ba3cc699e2d775713ca4034678d226d49b57eee971602fd3c\""
Aug 13 00:57:23.423683 env[1190]: time="2025-08-13T00:57:23.423625110Z" level=info msg="RemoveContainer for \"79604100dcd8600ba3cc699e2d775713ca4034678d226d49b57eee971602fd3c\" returns successfully"
Aug 13 00:57:23.424305 kubelet[1938]: I0813 00:57:23.424277 1938 scope.go:117] "RemoveContainer" containerID="79604100dcd8600ba3cc699e2d775713ca4034678d226d49b57eee971602fd3c"
Aug 13 00:57:23.426613 env[1190]: time="2025-08-13T00:57:23.425813880Z" level=error msg="ContainerStatus for \"79604100dcd8600ba3cc699e2d775713ca4034678d226d49b57eee971602fd3c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"79604100dcd8600ba3cc699e2d775713ca4034678d226d49b57eee971602fd3c\": not found"
Aug 13 00:57:23.426761 kubelet[1938]: E0813 00:57:23.426391 1938 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"79604100dcd8600ba3cc699e2d775713ca4034678d226d49b57eee971602fd3c\": not found" containerID="79604100dcd8600ba3cc699e2d775713ca4034678d226d49b57eee971602fd3c"
Aug 13 00:57:23.426761 kubelet[1938]: I0813 00:57:23.426450 1938 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"79604100dcd8600ba3cc699e2d775713ca4034678d226d49b57eee971602fd3c"} err="failed to get container status \"79604100dcd8600ba3cc699e2d775713ca4034678d226d49b57eee971602fd3c\": rpc error: code = NotFound desc = an error occurred when try to find container \"79604100dcd8600ba3cc699e2d775713ca4034678d226d49b57eee971602fd3c\": not found"
Aug 13 00:57:23.426761 kubelet[1938]: I0813 00:57:23.426661 1938 scope.go:117] "RemoveContainer" containerID="32da30aafb93814b548a81725a124e2939cb1b6c1cee6a83d934b4685c2543a3"
Aug 13 00:57:23.431087 env[1190]: time="2025-08-13T00:57:23.431033542Z" level=info msg="RemoveContainer for \"32da30aafb93814b548a81725a124e2939cb1b6c1cee6a83d934b4685c2543a3\""
Aug 13 00:57:23.437059 env[1190]: time="2025-08-13T00:57:23.436993195Z" level=info msg="RemoveContainer for \"32da30aafb93814b548a81725a124e2939cb1b6c1cee6a83d934b4685c2543a3\" returns successfully"
Aug 13 00:57:23.437945 kubelet[1938]: I0813 00:57:23.437673 1938 scope.go:117] "RemoveContainer" containerID="b69ac6e02f455790f51c517838cb0d9cb905dfb4bceeb1fb05839f95aacd1550"
Aug 13 00:57:23.438769 systemd[1]: Removed slice kubepods-burstable-pod91ab3916_4482_404b_b1a5_bd4bb11efae4.slice.
Aug 13 00:57:23.438864 systemd[1]: kubepods-burstable-pod91ab3916_4482_404b_b1a5_bd4bb11efae4.slice: Consumed 9.839s CPU time.
Aug 13 00:57:23.441797 env[1190]: time="2025-08-13T00:57:23.441748682Z" level=info msg="RemoveContainer for \"b69ac6e02f455790f51c517838cb0d9cb905dfb4bceeb1fb05839f95aacd1550\""
Aug 13 00:57:23.445819 env[1190]: time="2025-08-13T00:57:23.445747721Z" level=info msg="RemoveContainer for \"b69ac6e02f455790f51c517838cb0d9cb905dfb4bceeb1fb05839f95aacd1550\" returns successfully"
Aug 13 00:57:23.446220 kubelet[1938]: I0813 00:57:23.446173 1938 scope.go:117] "RemoveContainer" containerID="ffa32764481f9393d075ad6b6134172f3ecec164ee1e6e1287decfd55fba67d1"
Aug 13 00:57:23.449581 env[1190]: time="2025-08-13T00:57:23.449524863Z" level=info msg="RemoveContainer for \"ffa32764481f9393d075ad6b6134172f3ecec164ee1e6e1287decfd55fba67d1\""
Aug 13 00:57:23.456361 env[1190]: time="2025-08-13T00:57:23.456275605Z" level=info msg="RemoveContainer for \"ffa32764481f9393d075ad6b6134172f3ecec164ee1e6e1287decfd55fba67d1\" returns successfully"
Aug 13 00:57:23.456806 kubelet[1938]: I0813 00:57:23.456771 1938 scope.go:117] "RemoveContainer" containerID="9378e691e15a291ed1379c94e79cfc460303cc1efb29a4dd621fa453881515c7"
Aug 13 00:57:23.466109 env[1190]: time="2025-08-13T00:57:23.465306050Z" level=info msg="RemoveContainer for \"9378e691e15a291ed1379c94e79cfc460303cc1efb29a4dd621fa453881515c7\""
Aug 13 00:57:23.472224 env[1190]: time="2025-08-13T00:57:23.471469380Z" level=info msg="RemoveContainer for \"9378e691e15a291ed1379c94e79cfc460303cc1efb29a4dd621fa453881515c7\" returns successfully"
Aug 13 00:57:23.472484 kubelet[1938]: I0813 00:57:23.472455 1938 scope.go:117] "RemoveContainer" containerID="6e6efd4e32dd97e862e206467234a8208ec5447637835a8f8535e3d8061cf39d"
Aug 13 00:57:23.475379 env[1190]: time="2025-08-13T00:57:23.475323669Z" level=info msg="RemoveContainer for \"6e6efd4e32dd97e862e206467234a8208ec5447637835a8f8535e3d8061cf39d\""
Aug 13 00:57:23.481481 env[1190]: time="2025-08-13T00:57:23.481406406Z" level=info msg="RemoveContainer for \"6e6efd4e32dd97e862e206467234a8208ec5447637835a8f8535e3d8061cf39d\" returns successfully"
Aug 13 00:57:23.482106 kubelet[1938]: I0813 00:57:23.482083 1938 scope.go:117] "RemoveContainer" containerID="32da30aafb93814b548a81725a124e2939cb1b6c1cee6a83d934b4685c2543a3"
Aug 13 00:57:23.483223 env[1190]: time="2025-08-13T00:57:23.482673002Z" level=error msg="ContainerStatus for \"32da30aafb93814b548a81725a124e2939cb1b6c1cee6a83d934b4685c2543a3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"32da30aafb93814b548a81725a124e2939cb1b6c1cee6a83d934b4685c2543a3\": not found"
Aug 13 00:57:23.483477 kubelet[1938]: E0813 00:57:23.483450 1938 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"32da30aafb93814b548a81725a124e2939cb1b6c1cee6a83d934b4685c2543a3\": not found" containerID="32da30aafb93814b548a81725a124e2939cb1b6c1cee6a83d934b4685c2543a3"
Aug 13 00:57:23.483603 kubelet[1938]: I0813 00:57:23.483575 1938 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"32da30aafb93814b548a81725a124e2939cb1b6c1cee6a83d934b4685c2543a3"} err="failed to get container status \"32da30aafb93814b548a81725a124e2939cb1b6c1cee6a83d934b4685c2543a3\": rpc error: code = NotFound desc = an error occurred when try to find container \"32da30aafb93814b548a81725a124e2939cb1b6c1cee6a83d934b4685c2543a3\": not found"
Aug 13 00:57:23.483709 kubelet[1938]: I0813 00:57:23.483675 1938 scope.go:117] "RemoveContainer" containerID="b69ac6e02f455790f51c517838cb0d9cb905dfb4bceeb1fb05839f95aacd1550"
Aug 13 00:57:23.485111 env[1190]: time="2025-08-13T00:57:23.484098192Z" level=error msg="ContainerStatus for \"b69ac6e02f455790f51c517838cb0d9cb905dfb4bceeb1fb05839f95aacd1550\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b69ac6e02f455790f51c517838cb0d9cb905dfb4bceeb1fb05839f95aacd1550\": not found"
Aug 13 00:57:23.485111 env[1190]: time="2025-08-13T00:57:23.484805120Z" level=error msg="ContainerStatus for \"ffa32764481f9393d075ad6b6134172f3ecec164ee1e6e1287decfd55fba67d1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ffa32764481f9393d075ad6b6134172f3ecec164ee1e6e1287decfd55fba67d1\": not found"
Aug 13 00:57:23.485242 kubelet[1938]: E0813 00:57:23.484490 1938 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b69ac6e02f455790f51c517838cb0d9cb905dfb4bceeb1fb05839f95aacd1550\": not found" containerID="b69ac6e02f455790f51c517838cb0d9cb905dfb4bceeb1fb05839f95aacd1550"
Aug 13 00:57:23.485242 kubelet[1938]: I0813 00:57:23.484517 1938 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b69ac6e02f455790f51c517838cb0d9cb905dfb4bceeb1fb05839f95aacd1550"} err="failed to get container status \"b69ac6e02f455790f51c517838cb0d9cb905dfb4bceeb1fb05839f95aacd1550\": rpc error: code = NotFound desc = an error occurred when try to find container \"b69ac6e02f455790f51c517838cb0d9cb905dfb4bceeb1fb05839f95aacd1550\": not found"
Aug 13 00:57:23.485242 kubelet[1938]: I0813 00:57:23.484558 1938 scope.go:117] "RemoveContainer" containerID="ffa32764481f9393d075ad6b6134172f3ecec164ee1e6e1287decfd55fba67d1"
Aug 13 00:57:23.485242 kubelet[1938]: E0813 00:57:23.485026 1938 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ffa32764481f9393d075ad6b6134172f3ecec164ee1e6e1287decfd55fba67d1\": not found" containerID="ffa32764481f9393d075ad6b6134172f3ecec164ee1e6e1287decfd55fba67d1"
Aug 13 00:57:23.485242 kubelet[1938]: I0813 00:57:23.485046 1938 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ffa32764481f9393d075ad6b6134172f3ecec164ee1e6e1287decfd55fba67d1"} err="failed to get container status \"ffa32764481f9393d075ad6b6134172f3ecec164ee1e6e1287decfd55fba67d1\": rpc error: code = NotFound desc = an error occurred when try to find container \"ffa32764481f9393d075ad6b6134172f3ecec164ee1e6e1287decfd55fba67d1\": not found"
Aug 13 00:57:23.485242 kubelet[1938]: I0813 00:57:23.485080 1938 scope.go:117] "RemoveContainer" containerID="9378e691e15a291ed1379c94e79cfc460303cc1efb29a4dd621fa453881515c7"
Aug 13 00:57:23.485429 kubelet[1938]: E0813 00:57:23.485369 1938 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9378e691e15a291ed1379c94e79cfc460303cc1efb29a4dd621fa453881515c7\": not found" containerID="9378e691e15a291ed1379c94e79cfc460303cc1efb29a4dd621fa453881515c7"
Aug 13 00:57:23.485459 env[1190]: time="2025-08-13T00:57:23.485231716Z" level=error msg="ContainerStatus for \"9378e691e15a291ed1379c94e79cfc460303cc1efb29a4dd621fa453881515c7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9378e691e15a291ed1379c94e79cfc460303cc1efb29a4dd621fa453881515c7\": not found"
Aug 13 00:57:23.485491 kubelet[1938]: I0813 00:57:23.485427 1938 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9378e691e15a291ed1379c94e79cfc460303cc1efb29a4dd621fa453881515c7"} err="failed to get container status \"9378e691e15a291ed1379c94e79cfc460303cc1efb29a4dd621fa453881515c7\": rpc error: code = NotFound desc = an error occurred when try to find container \"9378e691e15a291ed1379c94e79cfc460303cc1efb29a4dd621fa453881515c7\": not found"
Aug 13 00:57:23.485491 kubelet[1938]: I0813 00:57:23.485443 1938 scope.go:117] "RemoveContainer" containerID="6e6efd4e32dd97e862e206467234a8208ec5447637835a8f8535e3d8061cf39d"
Aug 13 00:57:23.486016 env[1190]: time="2025-08-13T00:57:23.485924194Z" level=error msg="ContainerStatus for \"6e6efd4e32dd97e862e206467234a8208ec5447637835a8f8535e3d8061cf39d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6e6efd4e32dd97e862e206467234a8208ec5447637835a8f8535e3d8061cf39d\": not found"
Aug 13 00:57:23.486180 kubelet[1938]: E0813 00:57:23.486159 1938 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6e6efd4e32dd97e862e206467234a8208ec5447637835a8f8535e3d8061cf39d\": not found" containerID="6e6efd4e32dd97e862e206467234a8208ec5447637835a8f8535e3d8061cf39d"
Aug 13 00:57:23.486277 kubelet[1938]: I0813 00:57:23.486253 1938 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6e6efd4e32dd97e862e206467234a8208ec5447637835a8f8535e3d8061cf39d"} err="failed to get container status \"6e6efd4e32dd97e862e206467234a8208ec5447637835a8f8535e3d8061cf39d\": rpc error: code = NotFound desc = an error occurred when try to find container \"6e6efd4e32dd97e862e206467234a8208ec5447637835a8f8535e3d8061cf39d\": not found"
Aug 13 00:57:23.757239 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f4d75240f6abcfab4428336c2c50704d2c88d22906a70cebd06309fd415a7853-rootfs.mount: Deactivated successfully.
Aug 13 00:57:23.757357 systemd[1]: var-lib-kubelet-pods-91ab3916\x2d4482\x2d404b\x2db1a5\x2dbd4bb11efae4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8p5hb.mount: Deactivated successfully.
Aug 13 00:57:23.757429 systemd[1]: var-lib-kubelet-pods-91ab3916\x2d4482\x2d404b\x2db1a5\x2dbd4bb11efae4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Aug 13 00:57:23.757490 systemd[1]: var-lib-kubelet-pods-91ab3916\x2d4482\x2d404b\x2db1a5\x2dbd4bb11efae4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Aug 13 00:57:23.757554 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-15db8255f2ede4a1f1da1fe013f0e37f90fa4bf91f27f78a00f3994d5c4293b9-rootfs.mount: Deactivated successfully.
Aug 13 00:57:23.757610 systemd[1]: var-lib-kubelet-pods-8a43b934\x2d62a6\x2d4148\x2dbe81\x2d43e0131241f4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnqv9x.mount: Deactivated successfully.
Aug 13 00:57:23.841273 kubelet[1938]: I0813 00:57:23.841220 1938 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a43b934-62a6-4148-be81-43e0131241f4" path="/var/lib/kubelet/pods/8a43b934-62a6-4148-be81-43e0131241f4/volumes"
Aug 13 00:57:23.842389 kubelet[1938]: I0813 00:57:23.842359 1938 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91ab3916-4482-404b-b1a5-bd4bb11efae4" path="/var/lib/kubelet/pods/91ab3916-4482-404b-b1a5-bd4bb11efae4/volumes"
Aug 13 00:57:23.997988 kubelet[1938]: E0813 00:57:23.997890 1938 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Aug 13 00:57:24.680012 sshd[3633]: pam_unix(sshd:session): session closed for user core
Aug 13 00:57:24.685563 systemd[1]: sshd@29-143.198.229.35:22-139.178.68.195:36562.service: Deactivated successfully.
Aug 13 00:57:24.687305 systemd[1]: session-27.scope: Deactivated successfully.
Aug 13 00:57:24.687626 systemd[1]: session-27.scope: Consumed 1.461s CPU time.
Aug 13 00:57:24.688425 systemd-logind[1182]: Session 27 logged out. Waiting for processes to exit.
Aug 13 00:57:24.691227 systemd[1]: Started sshd@30-143.198.229.35:22-139.178.68.195:36574.service.
Aug 13 00:57:24.695087 systemd-logind[1182]: Removed session 27.
Aug 13 00:57:24.754490 sshd[3796]: Accepted publickey for core from 139.178.68.195 port 36574 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78
Aug 13 00:57:24.756889 sshd[3796]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:57:24.764811 systemd-logind[1182]: New session 28 of user core.
Aug 13 00:57:24.765339 systemd[1]: Started session-28.scope.
Aug 13 00:57:25.620385 sshd[3796]: pam_unix(sshd:session): session closed for user core
Aug 13 00:57:25.626831 systemd[1]: sshd@30-143.198.229.35:22-139.178.68.195:36574.service: Deactivated successfully.
Aug 13 00:57:25.627917 systemd[1]: session-28.scope: Deactivated successfully.
Aug 13 00:57:25.629024 systemd-logind[1182]: Session 28 logged out. Waiting for processes to exit.
Aug 13 00:57:25.630852 systemd[1]: Started sshd@31-143.198.229.35:22-139.178.68.195:36588.service.
Aug 13 00:57:25.634801 systemd-logind[1182]: Removed session 28.
Aug 13 00:57:25.679951 kubelet[1938]: I0813 00:57:25.679882 1938 memory_manager.go:355] "RemoveStaleState removing state" podUID="8a43b934-62a6-4148-be81-43e0131241f4" containerName="cilium-operator"
Aug 13 00:57:25.680563 kubelet[1938]: I0813 00:57:25.680518 1938 memory_manager.go:355] "RemoveStaleState removing state" podUID="91ab3916-4482-404b-b1a5-bd4bb11efae4" containerName="cilium-agent"
Aug 13 00:57:25.687022 sshd[3807]: Accepted publickey for core from 139.178.68.195 port 36588 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78
Aug 13 00:57:25.688399 sshd[3807]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:57:25.705356 systemd[1]: Started session-29.scope.
Aug 13 00:57:25.707557 systemd-logind[1182]: New session 29 of user core.
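The mount-unit names in the entries above (for example var-lib-kubelet-pods-...-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnqv9x.mount) are kubelet volume paths run through systemd's path escaping: "/" separators become "-", and any byte outside the safe set is written as \xNN. A simplified sketch of that transformation follows; it models systemd-escape --path under stated assumptions and omits edge cases such as a leading dot, so it is an illustration rather than systemd's exact implementation.

package main

import (
	"fmt"
	"strings"
)

// escapePath is a simplified model of systemd path escaping: trim the
// leading and trailing "/", turn remaining "/" separators into "-", and
// hex-escape any byte outside [A-Za-z0-9:_.] as \xNN. That is why "~" in
// "kubernetes.io~projected" appears as \x7e and "-" in "kube-api-access"
// as \x2d in the unit names logged above.
func escapePath(p string) string {
	p = strings.Trim(p, "/")
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			b.WriteByte('-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == ':', c == '_', c == '.':
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c)
		}
	}
	return b.String()
}

func main() {
	// The projected-token volume path from the log; the printed unit name
	// should match the ...kube\x2dapi\x2daccess\x2dnqv9x.mount entry above.
	p := "/var/lib/kubelet/pods/8a43b934-62a6-4148-be81-43e0131241f4/volumes/kubernetes.io~projected/kube-api-access-nqv9x"
	fmt.Println(escapePath(p) + ".mount")
}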
Aug 13 00:57:25.726200 kubelet[1938]: I0813 00:57:25.726145 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9-cni-path\") pod \"cilium-mlch2\" (UID: \"f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9\") " pod="kube-system/cilium-mlch2"
Aug 13 00:57:25.726566 kubelet[1938]: I0813 00:57:25.726533 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9-host-proc-sys-kernel\") pod \"cilium-mlch2\" (UID: \"f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9\") " pod="kube-system/cilium-mlch2"
Aug 13 00:57:25.728099 kubelet[1938]: I0813 00:57:25.726720 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9-hubble-tls\") pod \"cilium-mlch2\" (UID: \"f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9\") " pod="kube-system/cilium-mlch2"
Aug 13 00:57:25.728309 kubelet[1938]: I0813 00:57:25.728278 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9-lib-modules\") pod \"cilium-mlch2\" (UID: \"f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9\") " pod="kube-system/cilium-mlch2"
Aug 13 00:57:25.728420 kubelet[1938]: I0813 00:57:25.728403 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9-xtables-lock\") pod \"cilium-mlch2\" (UID: \"f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9\") " pod="kube-system/cilium-mlch2"
Aug 13 00:57:25.728541 kubelet[1938]: I0813 00:57:25.728520 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9-etc-cni-netd\") pod \"cilium-mlch2\" (UID: \"f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9\") " pod="kube-system/cilium-mlch2"
Aug 13 00:57:25.728668 kubelet[1938]: I0813 00:57:25.728647 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9-cilium-run\") pod \"cilium-mlch2\" (UID: \"f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9\") " pod="kube-system/cilium-mlch2"
Aug 13 00:57:25.728844 kubelet[1938]: I0813 00:57:25.728821 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9-cilium-cgroup\") pod \"cilium-mlch2\" (UID: \"f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9\") " pod="kube-system/cilium-mlch2"
Aug 13 00:57:25.729033 kubelet[1938]: I0813 00:57:25.728998 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2szvt\" (UniqueName: \"kubernetes.io/projected/f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9-kube-api-access-2szvt\") pod \"cilium-mlch2\" (UID: \"f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9\") " pod="kube-system/cilium-mlch2"
Aug 13 00:57:25.729163 kubelet[1938]: I0813 00:57:25.729139 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9-clustermesh-secrets\") pod \"cilium-mlch2\" (UID: \"f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9\") " pod="kube-system/cilium-mlch2"
Aug 13 00:57:25.729333 kubelet[1938]: I0813 00:57:25.729300 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9-cilium-config-path\") pod \"cilium-mlch2\" (UID: \"f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9\") " pod="kube-system/cilium-mlch2"
Aug 13 00:57:25.729462 kubelet[1938]: I0813 00:57:25.729442 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9-cilium-ipsec-secrets\") pod \"cilium-mlch2\" (UID: \"f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9\") " pod="kube-system/cilium-mlch2"
Aug 13 00:57:25.729602 kubelet[1938]: I0813 00:57:25.729585 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9-host-proc-sys-net\") pod \"cilium-mlch2\" (UID: \"f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9\") " pod="kube-system/cilium-mlch2"
Aug 13 00:57:25.729713 kubelet[1938]: I0813 00:57:25.729696 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9-bpf-maps\") pod \"cilium-mlch2\" (UID: \"f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9\") " pod="kube-system/cilium-mlch2"
Aug 13 00:57:25.729820 kubelet[1938]: I0813 00:57:25.729805 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9-hostproc\") pod \"cilium-mlch2\" (UID: \"f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9\") " pod="kube-system/cilium-mlch2"
Aug 13 00:57:25.736579 systemd[1]: Created slice kubepods-burstable-podf3a89f1c_3d3d_4b8e_af7a_c783f8e8b5b9.slice.
Aug 13 00:57:25.978429 sshd[3807]: pam_unix(sshd:session): session closed for user core
Aug 13 00:57:25.984806 systemd[1]: sshd@31-143.198.229.35:22-139.178.68.195:36588.service: Deactivated successfully.
Aug 13 00:57:25.986362 systemd[1]: session-29.scope: Deactivated successfully.
Aug 13 00:57:25.987868 systemd-logind[1182]: Session 29 logged out. Waiting for processes to exit.
Aug 13 00:57:25.992665 systemd[1]: Started sshd@32-143.198.229.35:22-139.178.68.195:36592.service.
Aug 13 00:57:25.994718 systemd-logind[1182]: Removed session 29.
Aug 13 00:57:26.015072 kubelet[1938]: E0813 00:57:26.015010 1938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 00:57:26.017454 env[1190]: time="2025-08-13T00:57:26.016475335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mlch2,Uid:f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9,Namespace:kube-system,Attempt:0,}"
Aug 13 00:57:26.045016 env[1190]: time="2025-08-13T00:57:26.037920080Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 00:57:26.045016 env[1190]: time="2025-08-13T00:57:26.038046408Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 00:57:26.045016 env[1190]: time="2025-08-13T00:57:26.038064380Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:57:26.048988 env[1190]: time="2025-08-13T00:57:26.048808190Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1eb6702eed6ad0fb983ac3a24987d76ae4bac270b9d62288aadccb07a1b1315e pid=3831 runtime=io.containerd.runc.v2
Aug 13 00:57:26.076613 sshd[3823]: Accepted publickey for core from 139.178.68.195 port 36592 ssh2: RSA SHA256:yzXwfpA3/t+SsypGIsFMny/LATRJUvoUmyalRpmBK78
Aug 13 00:57:26.078221 sshd[3823]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:57:26.086571 systemd[1]: Started session-30.scope.
Aug 13 00:57:26.088112 systemd-logind[1182]: New session 30 of user core.
Aug 13 00:57:26.096663 systemd[1]: Started cri-containerd-1eb6702eed6ad0fb983ac3a24987d76ae4bac270b9d62288aadccb07a1b1315e.scope.
Aug 13 00:57:26.145334 env[1190]: time="2025-08-13T00:57:26.145283193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mlch2,Uid:f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"1eb6702eed6ad0fb983ac3a24987d76ae4bac270b9d62288aadccb07a1b1315e\""
Aug 13 00:57:26.146751 kubelet[1938]: E0813 00:57:26.146714 1938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Aug 13 00:57:26.153234 env[1190]: time="2025-08-13T00:57:26.153158421Z" level=info msg="CreateContainer within sandbox \"1eb6702eed6ad0fb983ac3a24987d76ae4bac270b9d62288aadccb07a1b1315e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Aug 13 00:57:26.180249 env[1190]: time="2025-08-13T00:57:26.180129489Z" level=info msg="CreateContainer within sandbox \"1eb6702eed6ad0fb983ac3a24987d76ae4bac270b9d62288aadccb07a1b1315e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"06b51c6ce99990c499845c01c76564d2e6f610bdf8362444470ff50e892a6f23\""
Aug 13 00:57:26.181953 env[1190]: time="2025-08-13T00:57:26.181895909Z" level=info msg="StartContainer for \"06b51c6ce99990c499845c01c76564d2e6f610bdf8362444470ff50e892a6f23\""
Aug 13 00:57:26.216976 systemd[1]: Started cri-containerd-06b51c6ce99990c499845c01c76564d2e6f610bdf8362444470ff50e892a6f23.scope.
Aug 13 00:57:26.243411 systemd[1]: cri-containerd-06b51c6ce99990c499845c01c76564d2e6f610bdf8362444470ff50e892a6f23.scope: Deactivated successfully.
Aug 13 00:57:26.272169 env[1190]: time="2025-08-13T00:57:26.272049554Z" level=info msg="shim disconnected" id=06b51c6ce99990c499845c01c76564d2e6f610bdf8362444470ff50e892a6f23
Aug 13 00:57:26.272816 env[1190]: time="2025-08-13T00:57:26.272758259Z" level=warning msg="cleaning up after shim disconnected" id=06b51c6ce99990c499845c01c76564d2e6f610bdf8362444470ff50e892a6f23 namespace=k8s.io
Aug 13 00:57:26.273216 env[1190]: time="2025-08-13T00:57:26.273184590Z" level=info msg="cleaning up dead shim"
Aug 13 00:57:26.293693 env[1190]: time="2025-08-13T00:57:26.293627772Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:57:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3895 runtime=io.containerd.runc.v2\ntime=\"2025-08-13T00:57:26Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\ntime=\"2025-08-13T00:57:26Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/06b51c6ce99990c499845c01c76564d2e6f610bdf8362444470ff50e892a6f23/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Aug 13 00:57:26.294627 env[1190]: time="2025-08-13T00:57:26.294469757Z" level=error msg="copy shim log" error="read /proc/self/fd/39: file already closed"
Aug 13 00:57:26.295304 env[1190]: time="2025-08-13T00:57:26.295190482Z" level=error msg="Failed to pipe stderr of container \"06b51c6ce99990c499845c01c76564d2e6f610bdf8362444470ff50e892a6f23\"" error="reading from a closed fifo"
Aug 13 00:57:26.295520 env[1190]: time="2025-08-13T00:57:26.295192061Z" level=error msg="Failed to pipe stdout of container \"06b51c6ce99990c499845c01c76564d2e6f610bdf8362444470ff50e892a6f23\"" error="reading from a closed fifo"
Aug 13 00:57:26.300346 env[1190]: time="2025-08-13T00:57:26.300249300Z" level=error msg="StartContainer for \"06b51c6ce99990c499845c01c76564d2e6f610bdf8362444470ff50e892a6f23\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Aug 13 00:57:26.301236 kubelet[1938]: E0813 00:57:26.301170 1938 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="06b51c6ce99990c499845c01c76564d2e6f610bdf8362444470ff50e892a6f23"
Aug 13 00:57:26.310312 kubelet[1938]: E0813 00:57:26.310185 1938 kuberuntime_manager.go:1341] "Unhandled Error" err=<
Aug 13 00:57:26.310312 kubelet[1938]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Aug 13 00:57:26.310312 kubelet[1938]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Aug 13 00:57:26.310312 kubelet[1938]: rm /hostbin/cilium-mount
Aug 13 00:57:26.310684 kubelet[1938]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2szvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-mlch2_kube-system(f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Aug 13 00:57:26.310684 kubelet[1938]: > logger="UnhandledError"
Aug 13 00:57:26.311553 kubelet[1938]: E0813 00:57:26.311430 1938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-mlch2" podUID="f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9"
Aug 13 00:57:26.434112 env[1190]: time="2025-08-13T00:57:26.434050357Z" level=info msg="StopPodSandbox for \"1eb6702eed6ad0fb983ac3a24987d76ae4bac270b9d62288aadccb07a1b1315e\""
Aug 13 00:57:26.434720 env[1190]: time="2025-08-13T00:57:26.434435019Z" level=info msg="Container to stop \"06b51c6ce99990c499845c01c76564d2e6f610bdf8362444470ff50e892a6f23\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:57:26.448999 kubelet[1938]: I0813 00:57:26.448867 1938 setters.go:602] "Node became not ready" node="ci-3510.3.8-f-585a890caa" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T00:57:26Z","lastTransitionTime":"2025-08-13T00:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Aug 13 00:57:26.452934 systemd[1]: cri-containerd-1eb6702eed6ad0fb983ac3a24987d76ae4bac270b9d62288aadccb07a1b1315e.scope: Deactivated successfully.
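The fatal "write /proc/self/attr/keycreate: invalid argument" above comes from the step where the runtime applies the SELinuxOptions in the container spec (Type:spc_t, Level:s0): during container init, runc writes the computed context into /proc's SELinux attribute files, and a kernel with no matching SELinux policy loaded rejects the write with EINVAL. A bare-bones illustration of that write follows; the label string is an assumption assembled from the spec above (the user and role parts are guesses), and real runtimes perform this through the go-selinux library rather than directly.

package main

import (
	"fmt"
	"os"
)

// setKeyCreateLabel mirrors the failing operation from the log: writing an
// SELinux context into /proc/self/attr/keycreate so that kernel keyrings
// created afterwards carry that label. Where SELinux is absent or the
// context is undefined in the loaded policy, the kernel returns EINVAL,
// which surfaces as "write /proc/self/attr/keycreate: invalid argument".
func setKeyCreateLabel(label string) error {
	f, err := os.OpenFile("/proc/self/attr/keycreate", os.O_WRONLY, 0)
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = f.WriteString(label)
	return err
}

func main() {
	// Hypothetical label built from Type:spc_t and Level:s0 in the spec above.
	if err := setKeyCreateLabel("system_u:system_r:spc_t:s0"); err != nil {
		fmt.Println("keycreate write failed:", err)
	}
}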
Aug 13 00:57:26.509007 env[1190]: time="2025-08-13T00:57:26.508806592Z" level=info msg="shim disconnected" id=1eb6702eed6ad0fb983ac3a24987d76ae4bac270b9d62288aadccb07a1b1315e
Aug 13 00:57:26.509007 env[1190]: time="2025-08-13T00:57:26.508874755Z" level=warning msg="cleaning up after shim disconnected" id=1eb6702eed6ad0fb983ac3a24987d76ae4bac270b9d62288aadccb07a1b1315e namespace=k8s.io
Aug 13 00:57:26.509007 env[1190]: time="2025-08-13T00:57:26.508886057Z" level=info msg="cleaning up dead shim"
Aug 13 00:57:26.533212 env[1190]: time="2025-08-13T00:57:26.533140314Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:57:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3927 runtime=io.containerd.runc.v2\n"
Aug 13 00:57:26.533580 env[1190]: time="2025-08-13T00:57:26.533542614Z" level=info msg="TearDown network for sandbox \"1eb6702eed6ad0fb983ac3a24987d76ae4bac270b9d62288aadccb07a1b1315e\" successfully"
Aug 13 00:57:26.533580 env[1190]: time="2025-08-13T00:57:26.533574175Z" level=info msg="StopPodSandbox for \"1eb6702eed6ad0fb983ac3a24987d76ae4bac270b9d62288aadccb07a1b1315e\" returns successfully"
Aug 13 00:57:26.644228 kubelet[1938]: I0813 00:57:26.644155 1938 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9-hostproc\") pod \"f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9\" (UID: \"f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9\") "
Aug 13 00:57:26.644667 kubelet[1938]: I0813 00:57:26.644632 1938 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2szvt\" (UniqueName: \"kubernetes.io/projected/f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9-kube-api-access-2szvt\") pod \"f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9\" (UID: \"f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9\") "
Aug 13 00:57:26.644910 kubelet[1938]: I0813 00:57:26.644854 1938 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9-host-proc-sys-kernel\") pod \"f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9\" (UID: \"f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9\") "
Aug 13 00:57:26.644910 kubelet[1938]: I0813 00:57:26.644899 1938 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9-lib-modules\") pod \"f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9\" (UID: \"f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9\") "
Aug 13 00:57:26.645132 kubelet[1938]: I0813 00:57:26.644936 1938 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9-cni-path\") pod \"f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9\" (UID: \"f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9\") "
Aug 13 00:57:26.645132 kubelet[1938]: I0813 00:57:26.644991 1938 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9-xtables-lock\") pod \"f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9\" (UID: \"f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9\") "
Aug 13 00:57:26.645132 kubelet[1938]: I0813 00:57:26.645015 1938 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9-host-proc-sys-net\") pod \"f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9\" (UID: \"f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9\") "
Aug 13 00:57:26.645132 kubelet[1938]: I0813 00:57:26.645044 1938 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9-bpf-maps\") pod \"f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9\" (UID: \"f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9\") "
Aug 13 00:57:26.645132 kubelet[1938]: I0813 00:57:26.645063 1938 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9-cilium-run\") pod \"f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9\" (UID: \"f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9\") "
Aug 13 00:57:26.645132 kubelet[1938]: I0813 00:57:26.645085 1938 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9-cilium-ipsec-secrets\") pod \"f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9\" (UID: \"f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9\") "
Aug 13 00:57:26.645132 kubelet[1938]: I0813 00:57:26.645104 1938 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9-hubble-tls\") pod \"f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9\" (UID: \"f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9\") "
Aug 13 00:57:26.645132 kubelet[1938]: I0813 00:57:26.645125 1938 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9-etc-cni-netd\") pod \"f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9\" (UID: \"f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9\") "
Aug 13 00:57:26.645691 kubelet[1938]: I0813 00:57:26.645142 1938 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9-clustermesh-secrets\") pod \"f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9\" (UID: \"f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9\") "
Aug 13 00:57:26.645691 kubelet[1938]: I0813 00:57:26.645162 1938 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9-cilium-config-path\") pod \"f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9\" (UID: \"f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9\") "
Aug 13 00:57:26.645691 kubelet[1938]: I0813 00:57:26.645181 1938 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9-cilium-cgroup\") pod \"f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9\" (UID: \"f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9\") "
Aug 13 00:57:26.645691 kubelet[1938]: I0813 00:57:26.644320 1938 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9-hostproc" (OuterVolumeSpecName: "hostproc") pod "f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9" (UID: "f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:57:26.645691 kubelet[1938]: I0813 00:57:26.645241 1938 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9" (UID: "f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:57:26.645691 kubelet[1938]: I0813 00:57:26.645277 1938 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9" (UID: "f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:57:26.645691 kubelet[1938]: I0813 00:57:26.645293 1938 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9" (UID: "f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:57:26.645691 kubelet[1938]: I0813 00:57:26.645307 1938 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9-cni-path" (OuterVolumeSpecName: "cni-path") pod "f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9" (UID: "f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:57:26.645691 kubelet[1938]: I0813 00:57:26.645332 1938 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9" (UID: "f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:57:26.645691 kubelet[1938]: I0813 00:57:26.645355 1938 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9" (UID: "f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:57:26.645691 kubelet[1938]: I0813 00:57:26.645376 1938 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9" (UID: "f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:57:26.645691 kubelet[1938]: I0813 00:57:26.645396 1938 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9" (UID: "f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:57:26.653492 kubelet[1938]: I0813 00:57:26.653426 1938 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9" (UID: "f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:57:26.653724 kubelet[1938]: I0813 00:57:26.653552 1938 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9-kube-api-access-2szvt" (OuterVolumeSpecName: "kube-api-access-2szvt") pod "f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9" (UID: "f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9"). InnerVolumeSpecName "kube-api-access-2szvt". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Aug 13 00:57:26.654516 kubelet[1938]: I0813 00:57:26.654464 1938 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9" (UID: "f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Aug 13 00:57:26.654868 kubelet[1938]: I0813 00:57:26.654834 1938 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9" (UID: "f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Aug 13 00:57:26.656633 kubelet[1938]: I0813 00:57:26.656582 1938 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9" (UID: "f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Aug 13 00:57:26.658622 kubelet[1938]: I0813 00:57:26.658509 1938 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9" (UID: "f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 00:57:26.746169 kubelet[1938]: I0813 00:57:26.746022 1938 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9-cni-path\") on node \"ci-3510.3.8-f-585a890caa\" DevicePath \"\"" Aug 13 00:57:26.746169 kubelet[1938]: I0813 00:57:26.746089 1938 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9-xtables-lock\") on node \"ci-3510.3.8-f-585a890caa\" DevicePath \"\"" Aug 13 00:57:26.746169 kubelet[1938]: I0813 00:57:26.746107 1938 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9-host-proc-sys-net\") on node \"ci-3510.3.8-f-585a890caa\" DevicePath \"\"" Aug 13 00:57:26.746169 kubelet[1938]: I0813 00:57:26.746124 1938 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9-bpf-maps\") on node \"ci-3510.3.8-f-585a890caa\" DevicePath \"\"" Aug 13 00:57:26.746169 kubelet[1938]: I0813 00:57:26.746134 1938 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9-cilium-run\") on node \"ci-3510.3.8-f-585a890caa\" DevicePath \"\"" Aug 13 00:57:26.746169 kubelet[1938]: I0813 00:57:26.746144 1938 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9-cilium-ipsec-secrets\") on node \"ci-3510.3.8-f-585a890caa\" DevicePath \"\"" Aug 13 00:57:26.746169 kubelet[1938]: I0813 00:57:26.746153 1938 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9-hubble-tls\") on node \"ci-3510.3.8-f-585a890caa\" DevicePath \"\"" Aug 13 00:57:26.746169 kubelet[1938]: I0813 00:57:26.746163 1938 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9-etc-cni-netd\") on node \"ci-3510.3.8-f-585a890caa\" DevicePath \"\"" Aug 13 00:57:26.746169 kubelet[1938]: I0813 00:57:26.746172 1938 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9-cilium-config-path\") on node \"ci-3510.3.8-f-585a890caa\" DevicePath \"\"" Aug 13 00:57:26.746169 kubelet[1938]: I0813 00:57:26.746182 1938 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9-cilium-cgroup\") on node \"ci-3510.3.8-f-585a890caa\" DevicePath \"\"" Aug 13 00:57:26.746169 kubelet[1938]: I0813 00:57:26.746190 1938 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9-clustermesh-secrets\") on node \"ci-3510.3.8-f-585a890caa\" DevicePath \"\"" Aug 13 00:57:26.747107 kubelet[1938]: I0813 00:57:26.746200 1938 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9-hostproc\") on node \"ci-3510.3.8-f-585a890caa\" DevicePath \"\"" Aug 13 00:57:26.747107 kubelet[1938]: I0813 00:57:26.746209 1938 reconciler_common.go:299] "Volume 
detached for volume \"kube-api-access-2szvt\" (UniqueName: \"kubernetes.io/projected/f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9-kube-api-access-2szvt\") on node \"ci-3510.3.8-f-585a890caa\" DevicePath \"\"" Aug 13 00:57:26.747107 kubelet[1938]: I0813 00:57:26.746217 1938 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9-lib-modules\") on node \"ci-3510.3.8-f-585a890caa\" DevicePath \"\"" Aug 13 00:57:26.747107 kubelet[1938]: I0813 00:57:26.746228 1938 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9-host-proc-sys-kernel\") on node \"ci-3510.3.8-f-585a890caa\" DevicePath \"\"" Aug 13 00:57:26.848047 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1eb6702eed6ad0fb983ac3a24987d76ae4bac270b9d62288aadccb07a1b1315e-shm.mount: Deactivated successfully. Aug 13 00:57:26.849214 systemd[1]: var-lib-kubelet-pods-f3a89f1c\x2d3d3d\x2d4b8e\x2daf7a\x2dc783f8e8b5b9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2szvt.mount: Deactivated successfully. Aug 13 00:57:26.849294 systemd[1]: var-lib-kubelet-pods-f3a89f1c\x2d3d3d\x2d4b8e\x2daf7a\x2dc783f8e8b5b9-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Aug 13 00:57:26.849372 systemd[1]: var-lib-kubelet-pods-f3a89f1c\x2d3d3d\x2d4b8e\x2daf7a\x2dc783f8e8b5b9-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Aug 13 00:57:26.849464 systemd[1]: var-lib-kubelet-pods-f3a89f1c\x2d3d3d\x2d4b8e\x2daf7a\x2dc783f8e8b5b9-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Aug 13 00:57:27.440527 kubelet[1938]: I0813 00:57:27.440492 1938 scope.go:117] "RemoveContainer" containerID="06b51c6ce99990c499845c01c76564d2e6f610bdf8362444470ff50e892a6f23" Aug 13 00:57:27.440725 systemd[1]: Removed slice kubepods-burstable-podf3a89f1c_3d3d_4b8e_af7a_c783f8e8b5b9.slice. 
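
The "var-lib-kubelet-pods-...\x2d..." mount units deactivated above are systemd's escaped form of the kubelet volume paths: path escaping strips the leading slash, maps the remaining "/" to "-", and hex-escapes every byte outside [a-zA-Z0-9:_.], so "-" becomes \x2d and "~" becomes \x7e. A minimal Go sketch of that rule (simplified: it skips systemd's special-casing of a leading dot) reproduces the unit name for the kube-api-access-2szvt volume seen in the log:

    // escapePath approximates `systemd-escape --path` closely enough to
    // rebuild the .mount unit names logged above.
    package main

    import (
        "fmt"
        "strings"
    )

    func escapePath(p string) string {
        p = strings.Trim(p, "/")
        var b strings.Builder
        for i := 0; i < len(p); i++ {
            c := p[i]
            switch {
            case c == '/':
                b.WriteByte('-') // path separators become dashes
            case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
                c >= '0' && c <= '9', c == ':', c == '_', c == '.':
                b.WriteByte(c) // allowed verbatim in unit names
            default:
                fmt.Fprintf(&b, `\x%02x`, c) // e.g. '-' -> \x2d, '~' -> \x7e
            }
        }
        return b.String()
    }

    func main() {
        p := "/var/lib/kubelet/pods/f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9" +
            "/volumes/kubernetes.io~projected/kube-api-access-2szvt"
        fmt.Println(escapePath(p) + ".mount")
    }

Running this prints exactly the kube-api-access-2szvt .mount unit that systemd reports as "Deactivated successfully" above.
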
Aug 13 00:57:27.447006 env[1190]: time="2025-08-13T00:57:27.445616276Z" level=info msg="RemoveContainer for \"06b51c6ce99990c499845c01c76564d2e6f610bdf8362444470ff50e892a6f23\"" Aug 13 00:57:27.449653 env[1190]: time="2025-08-13T00:57:27.449588569Z" level=info msg="RemoveContainer for \"06b51c6ce99990c499845c01c76564d2e6f610bdf8362444470ff50e892a6f23\" returns successfully" Aug 13 00:57:27.504700 kubelet[1938]: I0813 00:57:27.504577 1938 memory_manager.go:355] "RemoveStaleState removing state" podUID="f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9" containerName="mount-cgroup" Aug 13 00:57:27.512030 kubelet[1938]: I0813 00:57:27.511923 1938 status_manager.go:890] "Failed to get status for pod" podUID="1acb1521-2f95-4c2d-8b77-9e199a078301" pod="kube-system/cilium-28tlt" err="pods \"cilium-28tlt\" is forbidden: User \"system:node:ci-3510.3.8-f-585a890caa\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.8-f-585a890caa' and this object" Aug 13 00:57:27.515875 kubelet[1938]: W0813 00:57:27.515824 1938 reflector.go:569] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510.3.8-f-585a890caa" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.8-f-585a890caa' and this object Aug 13 00:57:27.516218 kubelet[1938]: E0813 00:57:27.516173 1938 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ci-3510.3.8-f-585a890caa\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.8-f-585a890caa' and this object" logger="UnhandledError" Aug 13 00:57:27.517434 kubelet[1938]: W0813 00:57:27.516815 1938 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510.3.8-f-585a890caa" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.8-f-585a890caa' and this object Aug 13 00:57:27.517569 kubelet[1938]: E0813 00:57:27.517446 1938 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-3510.3.8-f-585a890caa\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.8-f-585a890caa' and this object" logger="UnhandledError" Aug 13 00:57:27.517569 kubelet[1938]: W0813 00:57:27.517369 1938 reflector.go:569] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-3510.3.8-f-585a890caa" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.8-f-585a890caa' and this object Aug 13 00:57:27.517569 kubelet[1938]: E0813 00:57:27.517486 1938 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:ci-3510.3.8-f-585a890caa\" cannot list resource \"secrets\" in API group \"\" in the namespace 
\"kube-system\": no relationship found between node 'ci-3510.3.8-f-585a890caa' and this object" logger="UnhandledError" Aug 13 00:57:27.521059 systemd[1]: Created slice kubepods-burstable-pod1acb1521_2f95_4c2d_8b77_9e199a078301.slice. Aug 13 00:57:27.553597 kubelet[1938]: I0813 00:57:27.553550 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1acb1521-2f95-4c2d-8b77-9e199a078301-cilium-config-path\") pod \"cilium-28tlt\" (UID: \"1acb1521-2f95-4c2d-8b77-9e199a078301\") " pod="kube-system/cilium-28tlt" Aug 13 00:57:27.553916 kubelet[1938]: I0813 00:57:27.553892 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1acb1521-2f95-4c2d-8b77-9e199a078301-etc-cni-netd\") pod \"cilium-28tlt\" (UID: \"1acb1521-2f95-4c2d-8b77-9e199a078301\") " pod="kube-system/cilium-28tlt" Aug 13 00:57:27.554187 kubelet[1938]: I0813 00:57:27.554142 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1acb1521-2f95-4c2d-8b77-9e199a078301-host-proc-sys-kernel\") pod \"cilium-28tlt\" (UID: \"1acb1521-2f95-4c2d-8b77-9e199a078301\") " pod="kube-system/cilium-28tlt" Aug 13 00:57:27.554254 kubelet[1938]: I0813 00:57:27.554191 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8szk\" (UniqueName: \"kubernetes.io/projected/1acb1521-2f95-4c2d-8b77-9e199a078301-kube-api-access-m8szk\") pod \"cilium-28tlt\" (UID: \"1acb1521-2f95-4c2d-8b77-9e199a078301\") " pod="kube-system/cilium-28tlt" Aug 13 00:57:27.554254 kubelet[1938]: I0813 00:57:27.554225 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1acb1521-2f95-4c2d-8b77-9e199a078301-hubble-tls\") pod \"cilium-28tlt\" (UID: \"1acb1521-2f95-4c2d-8b77-9e199a078301\") " pod="kube-system/cilium-28tlt" Aug 13 00:57:27.554254 kubelet[1938]: I0813 00:57:27.554250 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1acb1521-2f95-4c2d-8b77-9e199a078301-bpf-maps\") pod \"cilium-28tlt\" (UID: \"1acb1521-2f95-4c2d-8b77-9e199a078301\") " pod="kube-system/cilium-28tlt" Aug 13 00:57:27.554351 kubelet[1938]: I0813 00:57:27.554274 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1acb1521-2f95-4c2d-8b77-9e199a078301-cni-path\") pod \"cilium-28tlt\" (UID: \"1acb1521-2f95-4c2d-8b77-9e199a078301\") " pod="kube-system/cilium-28tlt" Aug 13 00:57:27.554351 kubelet[1938]: I0813 00:57:27.554299 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1acb1521-2f95-4c2d-8b77-9e199a078301-clustermesh-secrets\") pod \"cilium-28tlt\" (UID: \"1acb1521-2f95-4c2d-8b77-9e199a078301\") " pod="kube-system/cilium-28tlt" Aug 13 00:57:27.554351 kubelet[1938]: I0813 00:57:27.554322 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1acb1521-2f95-4c2d-8b77-9e199a078301-cilium-run\") pod \"cilium-28tlt\" (UID: 
\"1acb1521-2f95-4c2d-8b77-9e199a078301\") " pod="kube-system/cilium-28tlt" Aug 13 00:57:27.554448 kubelet[1938]: I0813 00:57:27.554348 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1acb1521-2f95-4c2d-8b77-9e199a078301-cilium-ipsec-secrets\") pod \"cilium-28tlt\" (UID: \"1acb1521-2f95-4c2d-8b77-9e199a078301\") " pod="kube-system/cilium-28tlt" Aug 13 00:57:27.554448 kubelet[1938]: I0813 00:57:27.554379 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1acb1521-2f95-4c2d-8b77-9e199a078301-hostproc\") pod \"cilium-28tlt\" (UID: \"1acb1521-2f95-4c2d-8b77-9e199a078301\") " pod="kube-system/cilium-28tlt" Aug 13 00:57:27.554448 kubelet[1938]: I0813 00:57:27.554406 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1acb1521-2f95-4c2d-8b77-9e199a078301-cilium-cgroup\") pod \"cilium-28tlt\" (UID: \"1acb1521-2f95-4c2d-8b77-9e199a078301\") " pod="kube-system/cilium-28tlt" Aug 13 00:57:27.554448 kubelet[1938]: I0813 00:57:27.554428 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1acb1521-2f95-4c2d-8b77-9e199a078301-lib-modules\") pod \"cilium-28tlt\" (UID: \"1acb1521-2f95-4c2d-8b77-9e199a078301\") " pod="kube-system/cilium-28tlt" Aug 13 00:57:27.554563 kubelet[1938]: I0813 00:57:27.554459 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1acb1521-2f95-4c2d-8b77-9e199a078301-xtables-lock\") pod \"cilium-28tlt\" (UID: \"1acb1521-2f95-4c2d-8b77-9e199a078301\") " pod="kube-system/cilium-28tlt" Aug 13 00:57:27.554563 kubelet[1938]: I0813 00:57:27.554484 1938 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1acb1521-2f95-4c2d-8b77-9e199a078301-host-proc-sys-net\") pod \"cilium-28tlt\" (UID: \"1acb1521-2f95-4c2d-8b77-9e199a078301\") " pod="kube-system/cilium-28tlt" Aug 13 00:57:27.840408 kubelet[1938]: I0813 00:57:27.840259 1938 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9" path="/var/lib/kubelet/pods/f3a89f1c-3d3d-4b8e-af7a-c783f8e8b5b9/volumes" Aug 13 00:57:28.656907 kubelet[1938]: E0813 00:57:28.656807 1938 secret.go:189] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition Aug 13 00:57:28.657207 kubelet[1938]: E0813 00:57:28.656992 1938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1acb1521-2f95-4c2d-8b77-9e199a078301-cilium-ipsec-secrets podName:1acb1521-2f95-4c2d-8b77-9e199a078301 nodeName:}" failed. No retries permitted until 2025-08-13 00:57:29.156927353 +0000 UTC m=+145.604177897 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/1acb1521-2f95-4c2d-8b77-9e199a078301-cilium-ipsec-secrets") pod "cilium-28tlt" (UID: "1acb1521-2f95-4c2d-8b77-9e199a078301") : failed to sync secret cache: timed out waiting for the condition Aug 13 00:57:28.657352 kubelet[1938]: E0813 00:57:28.657214 1938 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Aug 13 00:57:28.657352 kubelet[1938]: E0813 00:57:28.657263 1938 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1acb1521-2f95-4c2d-8b77-9e199a078301-clustermesh-secrets podName:1acb1521-2f95-4c2d-8b77-9e199a078301 nodeName:}" failed. No retries permitted until 2025-08-13 00:57:29.1572482 +0000 UTC m=+145.604498705 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/1acb1521-2f95-4c2d-8b77-9e199a078301-clustermesh-secrets") pod "cilium-28tlt" (UID: "1acb1521-2f95-4c2d-8b77-9e199a078301") : failed to sync secret cache: timed out waiting for the condition Aug 13 00:57:28.999940 kubelet[1938]: E0813 00:57:28.999854 1938 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Aug 13 00:57:29.325425 kubelet[1938]: E0813 00:57:29.325268 1938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:57:29.326260 env[1190]: time="2025-08-13T00:57:29.326191438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-28tlt,Uid:1acb1521-2f95-4c2d-8b77-9e199a078301,Namespace:kube-system,Attempt:0,}" Aug 13 00:57:29.349039 env[1190]: time="2025-08-13T00:57:29.348921170Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:57:29.349039 env[1190]: time="2025-08-13T00:57:29.349053331Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:57:29.349351 env[1190]: time="2025-08-13T00:57:29.349090507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:57:29.349434 env[1190]: time="2025-08-13T00:57:29.349383278Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5b736fd1aa45a38e4fb90abb673be91358881ef61601b96fe98c39b1cf0f80f4 pid=3955 runtime=io.containerd.runc.v2 Aug 13 00:57:29.379261 systemd[1]: Started cri-containerd-5b736fd1aa45a38e4fb90abb673be91358881ef61601b96fe98c39b1cf0f80f4.scope. 
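
The "failed to sync secret cache" and "No retries permitted until ... (durationBeforeRetry 500ms)" messages above are a startup race rather than a persistent fault: the node authorizer refuses the secret LIST/WATCH (the "no relationship found between node ... and this object" reflector errors) until the API server observes cilium-28tlt bound to this node, so the kubelet's secret cache cannot sync and each MountVolume.SetUp attempt is rescheduled with exponential backoff. A sketch of that schedule, assuming the upstream kubelet constants (500ms initial delay, doubling per failure, capped at 2m2s), which are worth re-checking against this kubelet version:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        d := 500 * time.Millisecond                    // first durationBeforeRetry
        const maxDelay = 2*time.Minute + 2*time.Second // assumed cap
        for attempt := 1; attempt <= 10; attempt++ {
            fmt.Printf("attempt %2d: wait %v\n", attempt, d)
            d *= 2
            if d > maxDelay {
                d = maxDelay
            }
        }
    }

Here the race apparently resolves within the first 500ms retry window: the retries are permitted from 00:57:29.15 and the sandbox is up by 00:57:29.42.
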
Aug 13 00:57:29.383012 kubelet[1938]: W0813 00:57:29.382114 1938 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3a89f1c_3d3d_4b8e_af7a_c783f8e8b5b9.slice/cri-containerd-06b51c6ce99990c499845c01c76564d2e6f610bdf8362444470ff50e892a6f23.scope WatchSource:0}: container "06b51c6ce99990c499845c01c76564d2e6f610bdf8362444470ff50e892a6f23" in namespace "k8s.io": not found Aug 13 00:57:29.421342 env[1190]: time="2025-08-13T00:57:29.421254414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-28tlt,Uid:1acb1521-2f95-4c2d-8b77-9e199a078301,Namespace:kube-system,Attempt:0,} returns sandbox id \"5b736fd1aa45a38e4fb90abb673be91358881ef61601b96fe98c39b1cf0f80f4\"" Aug 13 00:57:29.423097 kubelet[1938]: E0813 00:57:29.422527 1938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:57:29.428676 env[1190]: time="2025-08-13T00:57:29.428604162Z" level=info msg="CreateContainer within sandbox \"5b736fd1aa45a38e4fb90abb673be91358881ef61601b96fe98c39b1cf0f80f4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 00:57:29.477579 env[1190]: time="2025-08-13T00:57:29.477367252Z" level=info msg="CreateContainer within sandbox \"5b736fd1aa45a38e4fb90abb673be91358881ef61601b96fe98c39b1cf0f80f4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5dd3fb3312b2a589aab15318a31539be5de5b3d77672e5d59ef043389caa74fc\"" Aug 13 00:57:29.478605 env[1190]: time="2025-08-13T00:57:29.478567099Z" level=info msg="StartContainer for \"5dd3fb3312b2a589aab15318a31539be5de5b3d77672e5d59ef043389caa74fc\"" Aug 13 00:57:29.502147 systemd[1]: Started cri-containerd-5dd3fb3312b2a589aab15318a31539be5de5b3d77672e5d59ef043389caa74fc.scope. Aug 13 00:57:29.588578 env[1190]: time="2025-08-13T00:57:29.588433261Z" level=info msg="StartContainer for \"5dd3fb3312b2a589aab15318a31539be5de5b3d77672e5d59ef043389caa74fc\" returns successfully" Aug 13 00:57:29.606352 systemd[1]: cri-containerd-5dd3fb3312b2a589aab15318a31539be5de5b3d77672e5d59ef043389caa74fc.scope: Deactivated successfully. Aug 13 00:57:29.639305 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5dd3fb3312b2a589aab15318a31539be5de5b3d77672e5d59ef043389caa74fc-rootfs.mount: Deactivated successfully. 
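
RunPodSandbox, CreateContainer, and StartContainer above are the CRI gRPC calls the kubelet issues to containerd over its local socket. A compressed sketch of the same sequence using the published CRI client types; the socket path matches this host's containerd, while the pod and container metadata are illustrative and most required fields (sandbox config, image, mounts) are elided:

    package main

    import (
        "context"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx := context.Background()

        // 1. RunPodSandbox: create the pause sandbox ("5b736fd1..." above).
        sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
            Config: &runtimeapi.PodSandboxConfig{
                Metadata: &runtimeapi.PodSandboxMetadata{
                    Name:      "cilium-28tlt",
                    Namespace: "kube-system",
                    Uid:       "1acb1521-2f95-4c2d-8b77-9e199a078301",
                },
            },
        })
        if err != nil {
            panic(err)
        }

        // 2. CreateContainer within that sandbox ("mount-cgroup" above).
        cr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId: sb.PodSandboxId,
            Config: &runtimeapi.ContainerConfig{
                Metadata: &runtimeapi.ContainerMetadata{Name: "mount-cgroup"},
            },
        })
        if err != nil {
            panic(err)
        }

        // 3. StartContainer: the "StartContainer ... returns successfully" line.
        if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
            ContainerId: cr.ContainerId,
        }); err != nil {
            panic(err)
        }
    }
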
Aug 13 00:57:29.656861 env[1190]: time="2025-08-13T00:57:29.656801725Z" level=info msg="shim disconnected" id=5dd3fb3312b2a589aab15318a31539be5de5b3d77672e5d59ef043389caa74fc Aug 13 00:57:29.657245 env[1190]: time="2025-08-13T00:57:29.657197114Z" level=warning msg="cleaning up after shim disconnected" id=5dd3fb3312b2a589aab15318a31539be5de5b3d77672e5d59ef043389caa74fc namespace=k8s.io Aug 13 00:57:29.657379 env[1190]: time="2025-08-13T00:57:29.657358942Z" level=info msg="cleaning up dead shim" Aug 13 00:57:29.670909 env[1190]: time="2025-08-13T00:57:29.670856601Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:57:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4042 runtime=io.containerd.runc.v2\n" Aug 13 00:57:30.463326 kubelet[1938]: E0813 00:57:30.463283 1938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:57:30.466427 env[1190]: time="2025-08-13T00:57:30.466379966Z" level=info msg="CreateContainer within sandbox \"5b736fd1aa45a38e4fb90abb673be91358881ef61601b96fe98c39b1cf0f80f4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 13 00:57:30.498000 env[1190]: time="2025-08-13T00:57:30.497885388Z" level=info msg="CreateContainer within sandbox \"5b736fd1aa45a38e4fb90abb673be91358881ef61601b96fe98c39b1cf0f80f4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8f01296832f73c787917c6833cf82ac924d70b6abe3723be6288a8c108c80cca\"" Aug 13 00:57:30.500013 env[1190]: time="2025-08-13T00:57:30.498993121Z" level=info msg="StartContainer for \"8f01296832f73c787917c6833cf82ac924d70b6abe3723be6288a8c108c80cca\"" Aug 13 00:57:30.529736 systemd[1]: Started cri-containerd-8f01296832f73c787917c6833cf82ac924d70b6abe3723be6288a8c108c80cca.scope. Aug 13 00:57:30.595076 env[1190]: time="2025-08-13T00:57:30.595005089Z" level=info msg="StartContainer for \"8f01296832f73c787917c6833cf82ac924d70b6abe3723be6288a8c108c80cca\" returns successfully" Aug 13 00:57:30.604893 systemd[1]: cri-containerd-8f01296832f73c787917c6833cf82ac924d70b6abe3723be6288a8c108c80cca.scope: Deactivated successfully. Aug 13 00:57:30.640478 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f01296832f73c787917c6833cf82ac924d70b6abe3723be6288a8c108c80cca-rootfs.mount: Deactivated successfully. 
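
Each "shim disconnected" / "cleaning up dead shim" pair marks a short-lived container exiting: its io.containerd.runc.v2 shim (the "runtime=io.containerd.runc.v2" in the cleanup warning) shuts down and containerd reaps it. The same containers can be inspected through the containerd client in the CRI's "k8s.io" namespace; a small sketch, assuming the default socket path and permission to read it:

    package main

    import (
        "context"
        "fmt"

        containerd "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            panic(err)
        }
        defer client.Close()

        // CRI-managed containers live in the "k8s.io" namespace seen in the log.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        containers, err := client.Containers(ctx)
        if err != nil {
            panic(err)
        }
        for _, c := range containers {
            fmt.Println(c.ID()) // e.g. 5dd3fb3312b2..., 8f01296832f7...
        }
    }
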
Aug 13 00:57:30.651669 env[1190]: time="2025-08-13T00:57:30.651611901Z" level=info msg="shim disconnected" id=8f01296832f73c787917c6833cf82ac924d70b6abe3723be6288a8c108c80cca Aug 13 00:57:30.652217 env[1190]: time="2025-08-13T00:57:30.652184759Z" level=warning msg="cleaning up after shim disconnected" id=8f01296832f73c787917c6833cf82ac924d70b6abe3723be6288a8c108c80cca namespace=k8s.io Aug 13 00:57:30.652325 env[1190]: time="2025-08-13T00:57:30.652306112Z" level=info msg="cleaning up dead shim" Aug 13 00:57:30.676793 env[1190]: time="2025-08-13T00:57:30.676669171Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:57:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4105 runtime=io.containerd.runc.v2\n" Aug 13 00:57:30.838408 kubelet[1938]: E0813 00:57:30.838253 1938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:57:31.467829 kubelet[1938]: E0813 00:57:31.467781 1938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:57:31.476464 env[1190]: time="2025-08-13T00:57:31.476378479Z" level=info msg="CreateContainer within sandbox \"5b736fd1aa45a38e4fb90abb673be91358881ef61601b96fe98c39b1cf0f80f4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 13 00:57:31.494734 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4231431825.mount: Deactivated successfully. Aug 13 00:57:31.511301 env[1190]: time="2025-08-13T00:57:31.511186795Z" level=info msg="CreateContainer within sandbox \"5b736fd1aa45a38e4fb90abb673be91358881ef61601b96fe98c39b1cf0f80f4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6317b740ffc3f33eb86c23c0ea277477275c4437f19f759b45e89ae0f7603c6a\"" Aug 13 00:57:31.513077 env[1190]: time="2025-08-13T00:57:31.512747264Z" level=info msg="StartContainer for \"6317b740ffc3f33eb86c23c0ea277477275c4437f19f759b45e89ae0f7603c6a\"" Aug 13 00:57:31.573469 systemd[1]: Started cri-containerd-6317b740ffc3f33eb86c23c0ea277477275c4437f19f759b45e89ae0f7603c6a.scope. Aug 13 00:57:31.636371 systemd[1]: cri-containerd-6317b740ffc3f33eb86c23c0ea277477275c4437f19f759b45e89ae0f7603c6a.scope: Deactivated successfully. Aug 13 00:57:31.642028 env[1190]: time="2025-08-13T00:57:31.641884657Z" level=info msg="StartContainer for \"6317b740ffc3f33eb86c23c0ea277477275c4437f19f759b45e89ae0f7603c6a\" returns successfully" Aug 13 00:57:31.673838 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6317b740ffc3f33eb86c23c0ea277477275c4437f19f759b45e89ae0f7603c6a-rootfs.mount: Deactivated successfully. 
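
mount-bpf-fs is the Cilium init step that makes sure the BPF filesystem is mounted at /sys/fs/bpf, the equivalent of "mount -t bpf bpffs /sys/fs/bpf". A minimal sketch assuming the standard mount point and root privileges; real code would first scan /proc/self/mounts so it does not stack a second mount:

    package main

    import (
        "fmt"

        "golang.org/x/sys/unix"
    )

    func main() {
        // Mount a "bpf"-type filesystem; the source name "bpffs" is conventional.
        if err := unix.Mount("bpffs", "/sys/fs/bpf", "bpf", 0, ""); err != nil {
            panic(err)
        }
        fmt.Println("bpffs mounted at /sys/fs/bpf")
    }
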
Aug 13 00:57:31.683224 env[1190]: time="2025-08-13T00:57:31.683137268Z" level=info msg="shim disconnected" id=6317b740ffc3f33eb86c23c0ea277477275c4437f19f759b45e89ae0f7603c6a Aug 13 00:57:31.683627 env[1190]: time="2025-08-13T00:57:31.683592675Z" level=warning msg="cleaning up after shim disconnected" id=6317b740ffc3f33eb86c23c0ea277477275c4437f19f759b45e89ae0f7603c6a namespace=k8s.io Aug 13 00:57:31.683821 env[1190]: time="2025-08-13T00:57:31.683787759Z" level=info msg="cleaning up dead shim" Aug 13 00:57:31.698296 env[1190]: time="2025-08-13T00:57:31.698191230Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:57:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4163 runtime=io.containerd.runc.v2\n" Aug 13 00:57:32.474880 kubelet[1938]: E0813 00:57:32.474815 1938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:57:32.484567 env[1190]: time="2025-08-13T00:57:32.484286073Z" level=info msg="CreateContainer within sandbox \"5b736fd1aa45a38e4fb90abb673be91358881ef61601b96fe98c39b1cf0f80f4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 13 00:57:32.523051 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount685471438.mount: Deactivated successfully. Aug 13 00:57:32.549952 env[1190]: time="2025-08-13T00:57:32.549378536Z" level=info msg="CreateContainer within sandbox \"5b736fd1aa45a38e4fb90abb673be91358881ef61601b96fe98c39b1cf0f80f4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a10f9f55543bd115bd537fc6713d048f795f95414fe21d42cd474d6d0eb0b468\"" Aug 13 00:57:32.550505 env[1190]: time="2025-08-13T00:57:32.550456419Z" level=info msg="StartContainer for \"a10f9f55543bd115bd537fc6713d048f795f95414fe21d42cd474d6d0eb0b468\"" Aug 13 00:57:32.580473 systemd[1]: Started cri-containerd-a10f9f55543bd115bd537fc6713d048f795f95414fe21d42cd474d6d0eb0b468.scope. Aug 13 00:57:32.628569 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2032536534.mount: Deactivated successfully. Aug 13 00:57:32.636789 systemd[1]: cri-containerd-a10f9f55543bd115bd537fc6713d048f795f95414fe21d42cd474d6d0eb0b468.scope: Deactivated successfully. Aug 13 00:57:32.642952 env[1190]: time="2025-08-13T00:57:32.642871910Z" level=info msg="StartContainer for \"a10f9f55543bd115bd537fc6713d048f795f95414fe21d42cd474d6d0eb0b468\" returns successfully" Aug 13 00:57:32.670217 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a10f9f55543bd115bd537fc6713d048f795f95414fe21d42cd474d6d0eb0b468-rootfs.mount: Deactivated successfully. 
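
mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, and clean-cilium-state are cilium-28tlt's init containers: the kubelet runs each one to completion, strictly in order, before starting cilium-agent, which is why every CreateContainer above is followed within about a second by its scope being deactivated. The same progression is readable from the pod status; a client-go sketch, assuming in-cluster credentials:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        pod, err := cs.CoreV1().Pods("kube-system").Get(
            context.Background(), "cilium-28tlt", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // Init containers report terminated/ready states in spec order.
        for _, st := range pod.Status.InitContainerStatuses {
            fmt.Printf("%s: ready=%v restarts=%d\n",
                st.Name, st.Ready, st.RestartCount)
        }
    }
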
Aug 13 00:57:32.684673 env[1190]: time="2025-08-13T00:57:32.684558047Z" level=info msg="shim disconnected" id=a10f9f55543bd115bd537fc6713d048f795f95414fe21d42cd474d6d0eb0b468 Aug 13 00:57:32.684673 env[1190]: time="2025-08-13T00:57:32.684670799Z" level=warning msg="cleaning up after shim disconnected" id=a10f9f55543bd115bd537fc6713d048f795f95414fe21d42cd474d6d0eb0b468 namespace=k8s.io Aug 13 00:57:32.684673 env[1190]: time="2025-08-13T00:57:32.684688462Z" level=info msg="cleaning up dead shim" Aug 13 00:57:32.697938 env[1190]: time="2025-08-13T00:57:32.697817992Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:57:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4221 runtime=io.containerd.runc.v2\n" Aug 13 00:57:33.480175 kubelet[1938]: E0813 00:57:33.480129 1938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:57:33.487085 env[1190]: time="2025-08-13T00:57:33.486939931Z" level=info msg="CreateContainer within sandbox \"5b736fd1aa45a38e4fb90abb673be91358881ef61601b96fe98c39b1cf0f80f4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 13 00:57:33.523126 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1456652244.mount: Deactivated successfully. Aug 13 00:57:33.532303 env[1190]: time="2025-08-13T00:57:33.532208583Z" level=info msg="CreateContainer within sandbox \"5b736fd1aa45a38e4fb90abb673be91358881ef61601b96fe98c39b1cf0f80f4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"02ce98dde1079fd1ba08b2b49604e7e54746afad83b791d636c7fe60315c5527\"" Aug 13 00:57:33.534295 env[1190]: time="2025-08-13T00:57:33.534240321Z" level=info msg="StartContainer for \"02ce98dde1079fd1ba08b2b49604e7e54746afad83b791d636c7fe60315c5527\"" Aug 13 00:57:33.566220 systemd[1]: Started cri-containerd-02ce98dde1079fd1ba08b2b49604e7e54746afad83b791d636c7fe60315c5527.scope. Aug 13 00:57:33.619034 env[1190]: time="2025-08-13T00:57:33.618933818Z" level=info msg="StartContainer for \"02ce98dde1079fd1ba08b2b49604e7e54746afad83b791d636c7fe60315c5527\" returns successfully" Aug 13 00:57:33.651186 systemd[1]: run-containerd-runc-k8s.io-02ce98dde1079fd1ba08b2b49604e7e54746afad83b791d636c7fe60315c5527-runc.8nzdZe.mount: Deactivated successfully. Aug 13 00:57:34.128012 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Aug 13 00:57:34.486456 kubelet[1938]: E0813 00:57:34.486405 1938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:57:35.489298 kubelet[1938]: E0813 00:57:35.489242 1938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:57:35.758010 sshd[3459]: kex_exchange_identification: Connection closed by remote host Aug 13 00:57:35.758010 sshd[3459]: Connection closed by 183.171.215.115 port 57726 Aug 13 00:57:35.758360 systemd[1]: sshd@14-143.198.229.35:22-183.171.215.115:57726.service: Deactivated successfully. 
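
The kernel's "alg: No test for seqiv(rfc4106(gcm(aes)))" line is informational, not an error: the crypto self-test manager simply has no test vector registered for that ESP template, and it surfaces the first time an AES-GCM IPsec state is installed, here presumably by cilium-agent using the cilium-ipsec-secrets volume mounted earlier. Cilium documents the key material in that secret as a single "<spi> <algo> 0x<key+salt hex> <icv-bits>" line; a trivial parsing sketch with an illustrative, truncated key:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // Illustrative only; a real entry carries a full hex key+salt.
        keys := "3 rfc4106(gcm(aes)) 0x41fe19... 128"
        f := strings.Fields(keys)
        fmt.Printf("spi=%s algo=%s key=%s icvBits=%s\n", f[0], f[1], f[2], f[3])
    }
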
Aug 13 00:57:36.491083 kubelet[1938]: E0813 00:57:36.490951 1938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:57:36.701669 systemd[1]: run-containerd-runc-k8s.io-02ce98dde1079fd1ba08b2b49604e7e54746afad83b791d636c7fe60315c5527-runc.4A9YVi.mount: Deactivated successfully. Aug 13 00:57:37.709384 systemd-networkd[1008]: lxc_health: Link UP Aug 13 00:57:37.721010 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Aug 13 00:57:37.721174 systemd-networkd[1008]: lxc_health: Gained carrier Aug 13 00:57:38.840537 kubelet[1938]: E0813 00:57:38.840474 1938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:57:38.950101 systemd[1]: run-containerd-runc-k8s.io-02ce98dde1079fd1ba08b2b49604e7e54746afad83b791d636c7fe60315c5527-runc.xsYaCB.mount: Deactivated successfully. Aug 13 00:57:39.328264 kubelet[1938]: E0813 00:57:39.328198 1938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:57:39.369326 kubelet[1938]: I0813 00:57:39.369234 1938 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-28tlt" podStartSLOduration=12.369205356 podStartE2EDuration="12.369205356s" podCreationTimestamp="2025-08-13 00:57:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:57:34.524717525 +0000 UTC m=+150.971968031" watchObservedRunningTime="2025-08-13 00:57:39.369205356 +0000 UTC m=+155.816455866" Aug 13 00:57:39.498765 kubelet[1938]: E0813 00:57:39.498707 1938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:57:39.604317 systemd-networkd[1008]: lxc_health: Gained IPv6LL Aug 13 00:57:40.500690 kubelet[1938]: E0813 00:57:40.500646 1938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:57:41.227430 systemd[1]: run-containerd-runc-k8s.io-02ce98dde1079fd1ba08b2b49604e7e54746afad83b791d636c7fe60315c5527-runc.zNNAAM.mount: Deactivated successfully. Aug 13 00:57:43.419410 systemd[1]: run-containerd-runc-k8s.io-02ce98dde1079fd1ba08b2b49604e7e54746afad83b791d636c7fe60315c5527-runc.atkPux.mount: Deactivated successfully. Aug 13 00:57:43.534805 sshd[3823]: pam_unix(sshd:session): session closed for user core Aug 13 00:57:43.538870 systemd[1]: sshd@32-143.198.229.35:22-139.178.68.195:36592.service: Deactivated successfully. Aug 13 00:57:43.539937 systemd[1]: session-30.scope: Deactivated successfully. Aug 13 00:57:43.540933 systemd-logind[1182]: Session 30 logged out. Waiting for processes to exit. Aug 13 00:57:43.542115 systemd-logind[1182]: Removed session 30. 
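
The podStartSLOduration above is plain arithmetic over the logged timestamps: observedRunningTime minus podCreationTimestamp, minus time spent pulling images, which is zero here because both pull timestamps are the zero value (every image was already on the node). Reproducing the number:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        created, _ := time.Parse(time.RFC3339, "2025-08-13T00:57:27Z")
        running, _ := time.Parse(time.RFC3339Nano,
            "2025-08-13T00:57:39.369205356Z")
        fmt.Println(running.Sub(created)) // 12.369205356s
    }

The lxc_health link coming up just before is Cilium's health-check veth endpoint; carrier plus an IPv6 link-local address there is consistent with the datapath being fully up.
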
Aug 13 00:57:43.838361 kubelet[1938]: E0813 00:57:43.838307 1938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Aug 13 00:57:44.838097 kubelet[1938]: E0813 00:57:44.838044 1938 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
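
The "Nameserver limits exceeded" errors that recur through this log trace back to glibc's MAXNS limit of three resolvers: the resolv.conf the kubelet hands to pods ends up with more than three nameserver entries (the applied line even repeats 67.207.67.3), so the kubelet keeps the first three and warns on every sync. A sketch of that trimming, with de-duplication added on top; the raw list here, including 10.0.0.2, is hypothetical, since only the applied line appears in the log:

    package main

    import "fmt"

    func main() {
        const maxNS = 3 // glibc MAXNS: resolvers beyond this are ignored
        raw := []string{"67.207.67.3", "67.207.67.2", "67.207.67.3", "10.0.0.2"}

        seen := map[string]bool{}
        var applied []string
        for _, ns := range raw {
            if !seen[ns] && len(applied) < maxNS {
                seen[ns] = true
                applied = append(applied, ns)
            }
        }
        fmt.Println("nameserver line:", applied)
    }

Trimming the source resolv.conf (or the file named by the kubelet's --resolv-conf flag) to at most three unique nameservers should silence the warning.
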