Sep 13 00:47:12.025912 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Sep 12 23:13:49 -00 2025
Sep 13 00:47:12.025941 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec
Sep 13 00:47:12.025960 kernel: BIOS-provided physical RAM map:
Sep 13 00:47:12.025980 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 13 00:47:12.025990 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 13 00:47:12.029033 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 13 00:47:12.029043 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Sep 13 00:47:12.029050 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Sep 13 00:47:12.029063 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 13 00:47:12.029070 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 13 00:47:12.029077 kernel: NX (Execute Disable) protection: active
Sep 13 00:47:12.029083 kernel: SMBIOS 2.8 present.
Sep 13 00:47:12.029090 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Sep 13 00:47:12.029097 kernel: Hypervisor detected: KVM
Sep 13 00:47:12.029105 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 13 00:47:12.029115 kernel: kvm-clock: cpu 0, msr 4e19f001, primary cpu clock
Sep 13 00:47:12.029123 kernel: kvm-clock: using sched offset of 3712228271 cycles
Sep 13 00:47:12.029131 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 13 00:47:12.029145 kernel: tsc: Detected 2494.140 MHz processor
Sep 13 00:47:12.029152 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 13 00:47:12.029160 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 13 00:47:12.029168 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Sep 13 00:47:12.029175 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 13 00:47:12.029185 kernel: ACPI: Early table checksum verification disabled
Sep 13 00:47:12.029193 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Sep 13 00:47:12.029200 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:47:12.029208 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:47:12.029215 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:47:12.029223 kernel: ACPI: FACS 0x000000007FFE0000 000040
Sep 13 00:47:12.029230 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:47:12.029237 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:47:12.029244 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:47:12.029254 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:47:12.029262 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Sep 13 00:47:12.029269 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Sep 13 00:47:12.029276 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Sep 13 00:47:12.029284 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Sep 13 00:47:12.029291 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Sep 13 00:47:12.029298 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Sep 13 00:47:12.029305 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Sep 13 00:47:12.029320 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Sep 13 00:47:12.029327 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Sep 13 00:47:12.029335 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Sep 13 00:47:12.029343 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Sep 13 00:47:12.029351 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Sep 13 00:47:12.029359 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Sep 13 00:47:12.029370 kernel: Zone ranges:
Sep 13 00:47:12.029378 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 13 00:47:12.029386 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Sep 13 00:47:12.029394 kernel: Normal empty
Sep 13 00:47:12.029402 kernel: Movable zone start for each node
Sep 13 00:47:12.029409 kernel: Early memory node ranges
Sep 13 00:47:12.029417 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 13 00:47:12.029425 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Sep 13 00:47:12.029433 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Sep 13 00:47:12.029443 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 13 00:47:12.029454 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 13 00:47:12.029462 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Sep 13 00:47:12.029470 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 13 00:47:12.029477 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 13 00:47:12.029485 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 13 00:47:12.029493 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 13 00:47:12.029502 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 13 00:47:12.029510 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 13 00:47:12.029520 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 13 00:47:12.029530 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 13 00:47:12.029538 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 13 00:47:12.029546 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 13 00:47:12.029554 kernel: TSC deadline timer available
Sep 13 00:47:12.029562 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Sep 13 00:47:12.029570 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Sep 13 00:47:12.029577 kernel: Booting paravirtualized kernel on KVM
Sep 13 00:47:12.029585 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 13 00:47:12.029596 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Sep 13 00:47:12.029604 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Sep 13 00:47:12.029612 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Sep 13 00:47:12.029620 kernel: pcpu-alloc: [0] 0 1
Sep 13 00:47:12.029628 kernel: kvm-guest: stealtime: cpu 0, msr 7dc1c0c0
Sep 13 00:47:12.029636 kernel: kvm-guest: PV spinlocks disabled, no host support
Sep 13 00:47:12.029643 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Sep 13 00:47:12.029651 kernel: Policy zone: DMA32
Sep 13 00:47:12.029661 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec
Sep 13 00:47:12.029671 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 00:47:12.029679 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 13 00:47:12.029687 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 13 00:47:12.029694 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 00:47:12.029702 kernel: Memory: 1973276K/2096612K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47492K init, 4088K bss, 123076K reserved, 0K cma-reserved)
Sep 13 00:47:12.029710 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 13 00:47:12.029718 kernel: Kernel/User page tables isolation: enabled
Sep 13 00:47:12.029726 kernel: ftrace: allocating 34614 entries in 136 pages
Sep 13 00:47:12.029736 kernel: ftrace: allocated 136 pages with 2 groups
Sep 13 00:47:12.029744 kernel: rcu: Hierarchical RCU implementation.
Sep 13 00:47:12.029752 kernel: rcu: RCU event tracing is enabled.
Sep 13 00:47:12.029761 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 13 00:47:12.029769 kernel: Rude variant of Tasks RCU enabled.
Sep 13 00:47:12.029776 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 00:47:12.029785 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 00:47:12.029793 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 13 00:47:12.029801 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Sep 13 00:47:12.029811 kernel: random: crng init done
Sep 13 00:47:12.029818 kernel: Console: colour VGA+ 80x25
Sep 13 00:47:12.029826 kernel: printk: console [tty0] enabled
Sep 13 00:47:12.029834 kernel: printk: console [ttyS0] enabled
Sep 13 00:47:12.029842 kernel: ACPI: Core revision 20210730
Sep 13 00:47:12.029850 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 13 00:47:12.029857 kernel: APIC: Switch to symmetric I/O mode setup
Sep 13 00:47:12.029865 kernel: x2apic enabled
Sep 13 00:47:12.029873 kernel: Switched APIC routing to physical x2apic.
Sep 13 00:47:12.029881 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 13 00:47:12.029891 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Sep 13 00:47:12.029899 kernel: Calibrating delay loop (skipped) preset value.. 4988.28 BogoMIPS (lpj=2494140)
Sep 13 00:47:12.029911 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Sep 13 00:47:12.029919 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Sep 13 00:47:12.029927 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 13 00:47:12.029934 kernel: Spectre V2 : Mitigation: Retpolines
Sep 13 00:47:12.029942 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 13 00:47:12.029950 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Sep 13 00:47:12.029961 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 13 00:47:12.029997 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Sep 13 00:47:12.030005 kernel: MDS: Mitigation: Clear CPU buffers
Sep 13 00:47:12.030016 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 13 00:47:12.030024 kernel: active return thunk: its_return_thunk
Sep 13 00:47:12.030032 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 13 00:47:12.030041 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 13 00:47:12.030049 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 13 00:47:12.030057 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 13 00:47:12.030066 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 13 00:47:12.030077 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Sep 13 00:47:12.030085 kernel: Freeing SMP alternatives memory: 32K
Sep 13 00:47:12.030093 kernel: pid_max: default: 32768 minimum: 301
Sep 13 00:47:12.030101 kernel: LSM: Security Framework initializing
Sep 13 00:47:12.030110 kernel: SELinux: Initializing.
Sep 13 00:47:12.030118 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 13 00:47:12.030126 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 13 00:47:12.030138 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Sep 13 00:47:12.030146 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Sep 13 00:47:12.030155 kernel: signal: max sigframe size: 1776
Sep 13 00:47:12.030163 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 00:47:12.030171 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 13 00:47:12.030180 kernel: smp: Bringing up secondary CPUs ...
Sep 13 00:47:12.030188 kernel: x86: Booting SMP configuration:
Sep 13 00:47:12.030196 kernel: .... node #0, CPUs: #1
Sep 13 00:47:12.030205 kernel: kvm-clock: cpu 1, msr 4e19f041, secondary cpu clock
Sep 13 00:47:12.030215 kernel: kvm-guest: stealtime: cpu 1, msr 7dd1c0c0
Sep 13 00:47:12.030223 kernel: smp: Brought up 1 node, 2 CPUs
Sep 13 00:47:12.030232 kernel: smpboot: Max logical packages: 1
Sep 13 00:47:12.030240 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS)
Sep 13 00:47:12.030249 kernel: devtmpfs: initialized
Sep 13 00:47:12.030257 kernel: x86/mm: Memory block size: 128MB
Sep 13 00:47:12.030265 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 00:47:12.030274 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 13 00:47:12.030282 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 00:47:12.030292 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 00:47:12.030304 kernel: audit: initializing netlink subsys (disabled)
Sep 13 00:47:12.030317 kernel: audit: type=2000 audit(1757724431.227:1): state=initialized audit_enabled=0 res=1
Sep 13 00:47:12.030328 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 00:47:12.030339 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 13 00:47:12.030355 kernel: cpuidle: using governor menu
Sep 13 00:47:12.030370 kernel: ACPI: bus type PCI registered
Sep 13 00:47:12.030382 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 00:47:12.030394 kernel: dca service started, version 1.12.1
Sep 13 00:47:12.030409 kernel: PCI: Using configuration type 1 for base access
Sep 13 00:47:12.030421 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 13 00:47:12.030433 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 00:47:12.030445 kernel: ACPI: Added _OSI(Module Device)
Sep 13 00:47:12.030458 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 00:47:12.030470 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 00:47:12.030483 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Sep 13 00:47:12.030496 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Sep 13 00:47:12.030509 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Sep 13 00:47:12.030520 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 13 00:47:12.030529 kernel: ACPI: Interpreter enabled
Sep 13 00:47:12.030537 kernel: ACPI: PM: (supports S0 S5)
Sep 13 00:47:12.030545 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 13 00:47:12.030554 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 13 00:47:12.030562 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Sep 13 00:47:12.030570 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 13 00:47:12.030789 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Sep 13 00:47:12.030890 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Sep 13 00:47:12.030902 kernel: acpiphp: Slot [3] registered
Sep 13 00:47:12.030910 kernel: acpiphp: Slot [4] registered
Sep 13 00:47:12.030919 kernel: acpiphp: Slot [5] registered
Sep 13 00:47:12.030927 kernel: acpiphp: Slot [6] registered
Sep 13 00:47:12.030935 kernel: acpiphp: Slot [7] registered
Sep 13 00:47:12.030943 kernel: acpiphp: Slot [8] registered
Sep 13 00:47:12.030952 kernel: acpiphp: Slot [9] registered
Sep 13 00:47:12.030960 kernel: acpiphp: Slot [10] registered
Sep 13 00:47:12.030992 kernel: acpiphp: Slot [11] registered
Sep 13 00:47:12.031001 kernel: acpiphp: Slot [12] registered
Sep 13 00:47:12.031010 kernel: acpiphp: Slot [13] registered
Sep 13 00:47:12.031018 kernel: acpiphp: Slot [14] registered
Sep 13 00:47:12.031026 kernel: acpiphp: Slot [15] registered
Sep 13 00:47:12.031035 kernel: acpiphp: Slot [16] registered
Sep 13 00:47:12.031043 kernel: acpiphp: Slot [17] registered
Sep 13 00:47:12.031051 kernel: acpiphp: Slot [18] registered
Sep 13 00:47:12.031060 kernel: acpiphp: Slot [19] registered
Sep 13 00:47:12.031071 kernel: acpiphp: Slot [20] registered
Sep 13 00:47:12.031079 kernel: acpiphp: Slot [21] registered
Sep 13 00:47:12.031088 kernel: acpiphp: Slot [22] registered
Sep 13 00:47:12.031096 kernel: acpiphp: Slot [23] registered
Sep 13 00:47:12.031104 kernel: acpiphp: Slot [24] registered
Sep 13 00:47:12.031113 kernel: acpiphp: Slot [25] registered
Sep 13 00:47:12.031121 kernel: acpiphp: Slot [26] registered
Sep 13 00:47:12.031129 kernel: acpiphp: Slot [27] registered
Sep 13 00:47:12.031138 kernel: acpiphp: Slot [28] registered
Sep 13 00:47:12.031146 kernel: acpiphp: Slot [29] registered
Sep 13 00:47:12.031157 kernel: acpiphp: Slot [30] registered
Sep 13 00:47:12.031166 kernel: acpiphp: Slot [31] registered
Sep 13 00:47:12.031174 kernel: PCI host bridge to bus 0000:00
Sep 13 00:47:12.031281 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 13 00:47:12.031408 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 13 00:47:12.031493 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 13 00:47:12.031575 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Sep 13 00:47:12.031661 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Sep 13 00:47:12.031742 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 13 00:47:12.031858 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Sep 13 00:47:12.031965 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Sep 13 00:47:12.035225 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Sep 13 00:47:12.035338 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Sep 13 00:47:12.035484 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Sep 13 00:47:12.035599 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Sep 13 00:47:12.035706 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Sep 13 00:47:12.035808 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Sep 13 00:47:12.035914 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Sep 13 00:47:12.036084 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Sep 13 00:47:12.036199 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Sep 13 00:47:12.036294 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Sep 13 00:47:12.036397 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Sep 13 00:47:12.036541 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Sep 13 00:47:12.036659 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Sep 13 00:47:12.036758 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Sep 13 00:47:12.036849 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Sep 13 00:47:12.036956 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Sep 13 00:47:12.037072 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 13 00:47:12.037181 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Sep 13 00:47:12.037270 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Sep 13 00:47:12.037358 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Sep 13 00:47:12.037446 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Sep 13 00:47:12.037584 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep 13 00:47:12.037737 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Sep 13 00:47:12.037840 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Sep 13 00:47:12.037932 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Sep 13 00:47:12.038052 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Sep 13 00:47:12.038182 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Sep 13 00:47:12.038280 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Sep 13 00:47:12.038369 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Sep 13 00:47:12.038470 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Sep 13 00:47:12.038579 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Sep 13 00:47:12.038672 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Sep 13 00:47:12.038760 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Sep 13 00:47:12.038859 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Sep 13 00:47:12.038987 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Sep 13 00:47:12.039085 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Sep 13 00:47:12.039173 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Sep 13 00:47:12.039312 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Sep 13 00:47:12.039411 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Sep 13 00:47:12.039498 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Sep 13 00:47:12.039509 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 13 00:47:12.039518 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 13 00:47:12.039527 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 13 00:47:12.039539 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 13 00:47:12.039547 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep 13 00:47:12.039556 kernel: iommu: Default domain type: Translated
Sep 13 00:47:12.039565 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 13 00:47:12.039655 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Sep 13 00:47:12.039745 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 13 00:47:12.039833 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Sep 13 00:47:12.039844 kernel: vgaarb: loaded
Sep 13 00:47:12.039853 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 13 00:47:12.039864 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 13 00:47:12.039873 kernel: PTP clock support registered
Sep 13 00:47:12.039881 kernel: PCI: Using ACPI for IRQ routing
Sep 13 00:47:12.039891 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 13 00:47:12.039900 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 13 00:47:12.039908 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Sep 13 00:47:12.039916 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 13 00:47:12.039925 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 13 00:47:12.039933 kernel: clocksource: Switched to clocksource kvm-clock
Sep 13 00:47:12.039944 kernel: VFS: Disk quotas dquot_6.6.0
Sep 13 00:47:12.039953 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 13 00:47:12.039962 kernel: pnp: PnP ACPI init
Sep 13 00:47:12.039978 kernel: pnp: PnP ACPI: found 4 devices
Sep 13 00:47:12.039987 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 13 00:47:12.039996 kernel: NET: Registered PF_INET protocol family
Sep 13 00:47:12.040004 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 13 00:47:12.040013 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Sep 13 00:47:12.040038 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 13 00:47:12.040052 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 13 00:47:12.040064 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Sep 13 00:47:12.040077 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Sep 13 00:47:12.040089 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 13 00:47:12.040101 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 13 00:47:12.040110 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 13 00:47:12.040119 kernel: NET: Registered PF_XDP protocol family
Sep 13 00:47:12.040253 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 13 00:47:12.040343 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 13 00:47:12.040424 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 13 00:47:12.040504 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Sep 13 00:47:12.040595 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Sep 13 00:47:12.040697 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Sep 13 00:47:12.040793 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 13 00:47:12.040885 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Sep 13 00:47:12.040898 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Sep 13 00:47:12.041003 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x740 took 30118 usecs
Sep 13 00:47:12.041014 kernel: PCI: CLS 0 bytes, default 64
Sep 13 00:47:12.041023 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 13 00:47:12.041032 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns
Sep 13 00:47:12.041041 kernel: Initialise system trusted keyrings
Sep 13 00:47:12.041049 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Sep 13 00:47:12.041058 kernel: Key type asymmetric registered
Sep 13 00:47:12.041066 kernel: Asymmetric key parser 'x509' registered
Sep 13 00:47:12.041074 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 13 00:47:12.041086 kernel: io scheduler mq-deadline registered
Sep 13 00:47:12.041094 kernel: io scheduler kyber registered
Sep 13 00:47:12.041103 kernel: io scheduler bfq registered
Sep 13 00:47:12.041111 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 13 00:47:12.041120 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Sep 13 00:47:12.041129 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Sep 13 00:47:12.041137 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Sep 13 00:47:12.041146 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 13 00:47:12.041154 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 13 00:47:12.041165 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 13 00:47:12.041174 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 13 00:47:12.041183 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 13 00:47:12.041192 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 13 00:47:12.041341 kernel: rtc_cmos 00:03: RTC can wake from S4
Sep 13 00:47:12.041449 kernel: rtc_cmos 00:03: registered as rtc0
Sep 13 00:47:12.041533 kernel: rtc_cmos 00:03: setting system clock to 2025-09-13T00:47:11 UTC (1757724431)
Sep 13 00:47:12.041640 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Sep 13 00:47:12.041664 kernel: intel_pstate: CPU model not supported
Sep 13 00:47:12.041676 kernel: NET: Registered PF_INET6 protocol family
Sep 13 00:47:12.041687 kernel: Segment Routing with IPv6
Sep 13 00:47:12.041700 kernel: In-situ OAM (IOAM) with IPv6
Sep 13 00:47:12.041713 kernel: NET: Registered PF_PACKET protocol family
Sep 13 00:47:12.041725 kernel: Key type dns_resolver registered
Sep 13 00:47:12.041733 kernel: IPI shorthand broadcast: enabled
Sep 13 00:47:12.041742 kernel: sched_clock: Marking stable (604056324, 83258807)->(790566084, -103250953)
Sep 13 00:47:12.041751 kernel: registered taskstats version 1
Sep 13 00:47:12.041762 kernel: Loading compiled-in X.509 certificates
Sep 13 00:47:12.041771 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: d4931373bb0d9b9f95da11f02ae07d3649cc6c37'
Sep 13 00:47:12.041779 kernel: Key type .fscrypt registered
Sep 13 00:47:12.041788 kernel: Key type fscrypt-provisioning registered
Sep 13 00:47:12.041796 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 13 00:47:12.041805 kernel: ima: Allocated hash algorithm: sha1
Sep 13 00:47:12.041814 kernel: ima: No architecture policies found
Sep 13 00:47:12.041822 kernel: clk: Disabling unused clocks
Sep 13 00:47:12.041833 kernel: Freeing unused kernel image (initmem) memory: 47492K
Sep 13 00:47:12.041841 kernel: Write protecting the kernel read-only data: 28672k
Sep 13 00:47:12.041850 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Sep 13 00:47:12.041858 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K
Sep 13 00:47:12.041867 kernel: Run /init as init process
Sep 13 00:47:12.041875 kernel: with arguments:
Sep 13 00:47:12.041904 kernel: /init
Sep 13 00:47:12.041916 kernel: with environment:
Sep 13 00:47:12.041925 kernel: HOME=/
Sep 13 00:47:12.041962 kernel: TERM=linux
Sep 13 00:47:12.050047 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 13 00:47:12.050075 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 13 00:47:12.050090 systemd[1]: Detected virtualization kvm.
Sep 13 00:47:12.050100 systemd[1]: Detected architecture x86-64.
Sep 13 00:47:12.050110 systemd[1]: Running in initrd.
Sep 13 00:47:12.050119 systemd[1]: No hostname configured, using default hostname.
Sep 13 00:47:12.050128 systemd[1]: Hostname set to .
Sep 13 00:47:12.050144 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 00:47:12.050154 systemd[1]: Queued start job for default target initrd.target.
Sep 13 00:47:12.050163 systemd[1]: Started systemd-ask-password-console.path.
Sep 13 00:47:12.050172 systemd[1]: Reached target cryptsetup.target.
Sep 13 00:47:12.050181 systemd[1]: Reached target paths.target.
Sep 13 00:47:12.050190 systemd[1]: Reached target slices.target.
Sep 13 00:47:12.050199 systemd[1]: Reached target swap.target.
Sep 13 00:47:12.050209 systemd[1]: Reached target timers.target.
Sep 13 00:47:12.050221 systemd[1]: Listening on iscsid.socket.
Sep 13 00:47:12.050231 systemd[1]: Listening on iscsiuio.socket.
Sep 13 00:47:12.050240 systemd[1]: Listening on systemd-journald-audit.socket.
Sep 13 00:47:12.050249 systemd[1]: Listening on systemd-journald-dev-log.socket.
Sep 13 00:47:12.050258 systemd[1]: Listening on systemd-journald.socket.
Sep 13 00:47:12.050268 systemd[1]: Listening on systemd-networkd.socket.
Sep 13 00:47:12.050278 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 13 00:47:12.050287 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 13 00:47:12.050302 systemd[1]: Reached target sockets.target.
Sep 13 00:47:12.050312 systemd[1]: Starting kmod-static-nodes.service...
Sep 13 00:47:12.050324 systemd[1]: Finished network-cleanup.service.
Sep 13 00:47:12.050334 systemd[1]: Starting systemd-fsck-usr.service...
Sep 13 00:47:12.050344 systemd[1]: Starting systemd-journald.service...
Sep 13 00:47:12.050353 systemd[1]: Starting systemd-modules-load.service...
Sep 13 00:47:12.050364 systemd[1]: Starting systemd-resolved.service...
Sep 13 00:47:12.050374 systemd[1]: Starting systemd-vconsole-setup.service...
Sep 13 00:47:12.050394 systemd[1]: Finished kmod-static-nodes.service.
Sep 13 00:47:12.050403 systemd[1]: Finished systemd-fsck-usr.service.
Sep 13 00:47:12.050413 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Sep 13 00:47:12.050429 systemd-journald[184]: Journal started
Sep 13 00:47:12.050514 systemd-journald[184]: Runtime Journal (/run/log/journal/1c90a5770b91439cbba4a4bcbf6b3e5e) is 4.9M, max 39.5M, 34.5M free.
Sep 13 00:47:12.042216 systemd-modules-load[185]: Inserted module 'overlay'
Sep 13 00:47:12.052805 systemd-resolved[186]: Positive Trust Anchors:
Sep 13 00:47:12.052815 systemd-resolved[186]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:47:12.100126 systemd[1]: Started systemd-journald.service.
Sep 13 00:47:12.100172 kernel: audit: type=1130 audit(1757724432.081:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:12.100207 kernel: audit: type=1130 audit(1757724432.082:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:12.100226 kernel: audit: type=1130 audit(1757724432.082:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:12.100246 kernel: audit: type=1130 audit(1757724432.083:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:12.100265 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 13 00:47:12.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:12.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:12.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:12.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:12.052849 systemd-resolved[186]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 13 00:47:12.055837 systemd-resolved[186]: Defaulting to hostname 'linux'.
Sep 13 00:47:12.082662 systemd[1]: Started systemd-resolved.service.
Sep 13 00:47:12.083386 systemd[1]: Finished systemd-vconsole-setup.service.
Sep 13 00:47:12.084040 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Sep 13 00:47:12.084606 systemd[1]: Reached target nss-lookup.target.
Sep 13 00:47:12.092364 systemd[1]: Starting dracut-cmdline-ask.service...
Sep 13 00:47:12.112105 systemd[1]: Finished dracut-cmdline-ask.service.
Sep 13 00:47:12.114509 kernel: Bridge firewalling registered
Sep 13 00:47:12.113821 systemd-modules-load[185]: Inserted module 'br_netfilter'
Sep 13 00:47:12.120039 kernel: audit: type=1130 audit(1757724432.114:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:12.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:12.116583 systemd[1]: Starting dracut-cmdline.service...
Sep 13 00:47:12.131565 dracut-cmdline[201]: dracut-dracut-053 Sep 13 00:47:12.137702 dracut-cmdline[201]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=65d14b740db9e581daa1d0206188b16d2f1a39e5c5e0878b6855323cd7c584ec Sep 13 00:47:12.141007 kernel: SCSI subsystem initialized Sep 13 00:47:12.156384 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 13 00:47:12.156450 kernel: device-mapper: uevent: version 1.0.3 Sep 13 00:47:12.156465 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Sep 13 00:47:12.161989 systemd-modules-load[185]: Inserted module 'dm_multipath' Sep 13 00:47:12.164209 systemd[1]: Finished systemd-modules-load.service. Sep 13 00:47:12.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:12.173382 systemd[1]: Starting systemd-sysctl.service... Sep 13 00:47:12.182602 kernel: audit: type=1130 audit(1757724432.171:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:12.186750 systemd[1]: Finished systemd-sysctl.service. Sep 13 00:47:12.191262 kernel: audit: type=1130 audit(1757724432.186:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:47:12.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:12.228012 kernel: Loading iSCSI transport class v2.0-870. Sep 13 00:47:12.254010 kernel: iscsi: registered transport (tcp) Sep 13 00:47:12.282024 kernel: iscsi: registered transport (qla4xxx) Sep 13 00:47:12.282139 kernel: QLogic iSCSI HBA Driver Sep 13 00:47:12.333015 systemd[1]: Finished dracut-cmdline.service. Sep 13 00:47:12.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:12.334897 systemd[1]: Starting dracut-pre-udev.service... Sep 13 00:47:12.338114 kernel: audit: type=1130 audit(1757724432.332:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:12.402040 kernel: raid6: avx2x4 gen() 17080 MB/s Sep 13 00:47:12.419041 kernel: raid6: avx2x4 xor() 7212 MB/s Sep 13 00:47:12.436108 kernel: raid6: avx2x2 gen() 17305 MB/s Sep 13 00:47:12.453049 kernel: raid6: avx2x2 xor() 20623 MB/s Sep 13 00:47:12.470043 kernel: raid6: avx2x1 gen() 12829 MB/s Sep 13 00:47:12.487032 kernel: raid6: avx2x1 xor() 17702 MB/s Sep 13 00:47:12.504098 kernel: raid6: sse2x4 gen() 10634 MB/s Sep 13 00:47:12.521051 kernel: raid6: sse2x4 xor() 5679 MB/s Sep 13 00:47:12.538059 kernel: raid6: sse2x2 gen() 10311 MB/s Sep 13 00:47:12.555043 kernel: raid6: sse2x2 xor() 8168 MB/s Sep 13 00:47:12.572072 kernel: raid6: sse2x1 gen() 8767 MB/s Sep 13 00:47:12.589643 kernel: raid6: sse2x1 xor() 5422 MB/s Sep 13 00:47:12.589746 kernel: raid6: using algorithm avx2x2 gen() 17305 MB/s Sep 13 00:47:12.589778 kernel: raid6: .... 
xor() 20623 MB/s, rmw enabled Sep 13 00:47:12.590370 kernel: raid6: using avx2x2 recovery algorithm Sep 13 00:47:12.608054 kernel: xor: automatically using best checksumming function avx Sep 13 00:47:12.741027 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Sep 13 00:47:12.757201 systemd[1]: Finished dracut-pre-udev.service. Sep 13 00:47:12.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:12.761895 systemd[1]: Starting systemd-udevd.service... Sep 13 00:47:12.762952 kernel: audit: type=1130 audit(1757724432.756:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:12.757000 audit: BPF prog-id=7 op=LOAD Sep 13 00:47:12.757000 audit: BPF prog-id=8 op=LOAD Sep 13 00:47:12.785478 systemd-udevd[384]: Using default interface naming scheme 'v252'. Sep 13 00:47:12.794465 systemd[1]: Started systemd-udevd.service. Sep 13 00:47:12.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:12.796174 systemd[1]: Starting dracut-pre-trigger.service... Sep 13 00:47:12.814244 dracut-pre-trigger[385]: rd.md=0: removing MD RAID activation Sep 13 00:47:12.867517 systemd[1]: Finished dracut-pre-trigger.service. Sep 13 00:47:12.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:12.869175 systemd[1]: Starting systemd-udev-trigger.service... Sep 13 00:47:12.922882 systemd[1]: Finished systemd-udev-trigger.service. 
Sep 13 00:47:12.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:12.984367 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Sep 13 00:47:13.035553 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 13 00:47:13.035575 kernel: GPT:9289727 != 125829119 Sep 13 00:47:13.035586 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 13 00:47:13.035603 kernel: GPT:9289727 != 125829119 Sep 13 00:47:13.035617 kernel: scsi host0: Virtio SCSI HBA Sep 13 00:47:13.035767 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 13 00:47:13.035778 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:47:13.035789 kernel: cryptd: max_cpu_qlen set to 1000 Sep 13 00:47:13.037553 kernel: virtio_blk virtio5: [vdb] 976 512-byte logical blocks (500 kB/488 KiB) Sep 13 00:47:13.057010 kernel: AVX2 version of gcm_enc/dec engaged. Sep 13 00:47:13.078006 kernel: AES CTR mode by8 optimization enabled Sep 13 00:47:13.089403 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Sep 13 00:47:13.147704 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (442) Sep 13 00:47:13.147736 kernel: libata version 3.00 loaded. 
Sep 13 00:47:13.147749 kernel: ata_piix 0000:00:01.1: version 2.13 Sep 13 00:47:13.147928 kernel: ACPI: bus type USB registered Sep 13 00:47:13.147940 kernel: usbcore: registered new interface driver usbfs Sep 13 00:47:13.147960 kernel: usbcore: registered new interface driver hub Sep 13 00:47:13.148004 kernel: usbcore: registered new device driver usb Sep 13 00:47:13.148022 kernel: scsi host1: ata_piix Sep 13 00:47:13.148194 kernel: scsi host2: ata_piix Sep 13 00:47:13.148337 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Sep 13 00:47:13.148350 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Sep 13 00:47:13.148361 kernel: ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver Sep 13 00:47:13.149202 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Sep 13 00:47:13.163184 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Sep 13 00:47:13.174849 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Sep 13 00:47:13.181835 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 13 00:47:13.186155 systemd[1]: Starting disk-uuid.service... Sep 13 00:47:13.201300 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:47:13.204861 disk-uuid[504]: Primary Header is updated. Sep 13 00:47:13.204861 disk-uuid[504]: Secondary Entries is updated. Sep 13 00:47:13.204861 disk-uuid[504]: Secondary Header is updated. 
Sep 13 00:47:13.284095 kernel: ehci-pci: EHCI PCI platform driver Sep 13 00:47:13.297008 kernel: uhci_hcd: USB Universal Host Controller Interface driver Sep 13 00:47:13.320436 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Sep 13 00:47:13.323609 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Sep 13 00:47:13.323745 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Sep 13 00:47:13.323849 kernel: uhci_hcd 0000:00:01.2: irq 11, io base 0x0000c180 Sep 13 00:47:13.323959 kernel: hub 1-0:1.0: USB hub found Sep 13 00:47:13.324186 kernel: hub 1-0:1.0: 2 ports detected Sep 13 00:47:14.221730 disk-uuid[505]: The operation has completed successfully. Sep 13 00:47:14.222448 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 00:47:14.266127 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 13 00:47:14.266233 systemd[1]: Finished disk-uuid.service. Sep 13 00:47:14.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:14.265000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:14.268253 systemd[1]: Starting verity-setup.service... Sep 13 00:47:14.288065 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Sep 13 00:47:14.343187 systemd[1]: Found device dev-mapper-usr.device. Sep 13 00:47:14.346061 systemd[1]: Mounting sysusr-usr.mount... Sep 13 00:47:14.346874 systemd[1]: Finished verity-setup.service. Sep 13 00:47:14.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:14.450000 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. 
Quota mode: none. Sep 13 00:47:14.448089 systemd[1]: Mounted sysusr-usr.mount. Sep 13 00:47:14.448647 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Sep 13 00:47:14.451777 systemd[1]: Starting ignition-setup.service... Sep 13 00:47:14.454041 systemd[1]: Starting parse-ip-for-networkd.service... Sep 13 00:47:14.467565 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:47:14.467653 kernel: BTRFS info (device vda6): using free space tree Sep 13 00:47:14.467666 kernel: BTRFS info (device vda6): has skinny extents Sep 13 00:47:14.489231 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 13 00:47:14.498778 systemd[1]: Finished ignition-setup.service. Sep 13 00:47:14.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:14.501952 systemd[1]: Starting ignition-fetch-offline.service... Sep 13 00:47:14.660269 systemd[1]: Finished parse-ip-for-networkd.service. Sep 13 00:47:14.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:14.661000 audit: BPF prog-id=9 op=LOAD Sep 13 00:47:14.662937 systemd[1]: Starting systemd-networkd.service... 
Sep 13 00:47:14.678177 ignition[601]: Ignition 2.14.0 Sep 13 00:47:14.678904 ignition[601]: Stage: fetch-offline Sep 13 00:47:14.679361 ignition[601]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:47:14.679885 ignition[601]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Sep 13 00:47:14.683360 ignition[601]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 13 00:47:14.684192 ignition[601]: parsed url from cmdline: "" Sep 13 00:47:14.684281 ignition[601]: no config URL provided Sep 13 00:47:14.685168 ignition[601]: reading system config file "/usr/lib/ignition/user.ign" Sep 13 00:47:14.685803 ignition[601]: no config at "/usr/lib/ignition/user.ign" Sep 13 00:47:14.686326 ignition[601]: failed to fetch config: resource requires networking Sep 13 00:47:14.687313 ignition[601]: Ignition finished successfully Sep 13 00:47:14.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:14.690155 systemd[1]: Finished ignition-fetch-offline.service. Sep 13 00:47:14.690960 systemd-networkd[688]: lo: Link UP Sep 13 00:47:14.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:14.690965 systemd-networkd[688]: lo: Gained carrier Sep 13 00:47:14.691719 systemd-networkd[688]: Enumeration completed Sep 13 00:47:14.692133 systemd-networkd[688]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 00:47:14.692374 systemd[1]: Started systemd-networkd.service. Sep 13 00:47:14.693194 systemd-networkd[688]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. 
Sep 13 00:47:14.693433 systemd[1]: Reached target network.target. Sep 13 00:47:14.694234 systemd-networkd[688]: eth1: Link UP Sep 13 00:47:14.694239 systemd-networkd[688]: eth1: Gained carrier Sep 13 00:47:14.695605 systemd[1]: Starting ignition-fetch.service... Sep 13 00:47:14.698952 systemd[1]: Starting iscsiuio.service... Sep 13 00:47:14.709510 systemd-networkd[688]: eth0: Link UP Sep 13 00:47:14.709519 systemd-networkd[688]: eth0: Gained carrier Sep 13 00:47:14.729298 ignition[690]: Ignition 2.14.0 Sep 13 00:47:14.730217 ignition[690]: Stage: fetch Sep 13 00:47:14.730428 ignition[690]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:47:14.730458 ignition[690]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Sep 13 00:47:14.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:14.733256 systemd[1]: Started iscsiuio.service. Sep 13 00:47:14.735195 systemd[1]: Starting iscsid.service... 
Sep 13 00:47:14.738711 ignition[690]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 13 00:47:14.738929 ignition[690]: parsed url from cmdline: "" Sep 13 00:47:14.738935 ignition[690]: no config URL provided Sep 13 00:47:14.738945 ignition[690]: reading system config file "/usr/lib/ignition/user.ign" Sep 13 00:47:14.738959 ignition[690]: no config at "/usr/lib/ignition/user.ign" Sep 13 00:47:14.741579 ignition[690]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Sep 13 00:47:14.742140 systemd-networkd[688]: eth0: DHCPv4 address 146.190.148.102/20, gateway 146.190.144.1 acquired from 169.254.169.253 Sep 13 00:47:14.745518 iscsid[698]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Sep 13 00:47:14.745518 iscsid[698]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Sep 13 00:47:14.745518 iscsid[698]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Sep 13 00:47:14.745518 iscsid[698]: If using hardware iscsi like qla4xxx this message can be ignored. Sep 13 00:47:14.745518 iscsid[698]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Sep 13 00:47:14.745518 iscsid[698]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Sep 13 00:47:14.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:47:14.747835 ignition[690]: GET error: Get "http://169.254.169.254/metadata/v1/user-data": dial tcp 169.254.169.254:80: connect: network is unreachable Sep 13 00:47:14.746103 systemd[1]: Started iscsid.service. Sep 13 00:47:14.748432 systemd[1]: Starting dracut-initqueue.service... Sep 13 00:47:14.748610 systemd-networkd[688]: eth1: DHCPv4 address 10.124.0.20/20 acquired from 169.254.169.253 Sep 13 00:47:14.770937 systemd[1]: Finished dracut-initqueue.service. Sep 13 00:47:14.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:14.771650 systemd[1]: Reached target remote-fs-pre.target. Sep 13 00:47:14.772436 systemd[1]: Reached target remote-cryptsetup.target. Sep 13 00:47:14.773396 systemd[1]: Reached target remote-fs.target. Sep 13 00:47:14.775716 systemd[1]: Starting dracut-pre-mount.service... Sep 13 00:47:14.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:14.791115 systemd[1]: Finished dracut-pre-mount.service. 
Sep 13 00:47:14.948058 ignition[690]: GET http://169.254.169.254/metadata/v1/user-data: attempt #2 Sep 13 00:47:14.974824 ignition[690]: GET result: OK Sep 13 00:47:14.975159 ignition[690]: parsing config with SHA512: 3a603736259926afff94c973a5fb6784addbe7921fadb3ecac4f0fcec297a7eb77401f386cbe686104be696a44961c55d589659c31678114be2f2a28639d57b5 Sep 13 00:47:14.988206 unknown[690]: fetched base config from "system" Sep 13 00:47:14.988894 unknown[690]: fetched base config from "system" Sep 13 00:47:14.989434 unknown[690]: fetched user config from "digitalocean" Sep 13 00:47:14.990359 ignition[690]: fetch: fetch complete Sep 13 00:47:14.990753 ignition[690]: fetch: fetch passed Sep 13 00:47:14.991167 ignition[690]: Ignition finished successfully Sep 13 00:47:14.993596 systemd[1]: Finished ignition-fetch.service. Sep 13 00:47:14.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:14.995068 systemd[1]: Starting ignition-kargs.service... Sep 13 00:47:15.020789 ignition[713]: Ignition 2.14.0 Sep 13 00:47:15.020806 ignition[713]: Stage: kargs Sep 13 00:47:15.021049 ignition[713]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:47:15.021081 ignition[713]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Sep 13 00:47:15.024644 ignition[713]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 13 00:47:15.027687 ignition[713]: kargs: kargs passed Sep 13 00:47:15.027807 ignition[713]: Ignition finished successfully Sep 13 00:47:15.029047 systemd[1]: Finished ignition-kargs.service. Sep 13 00:47:15.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Sep 13 00:47:15.030687 systemd[1]: Starting ignition-disks.service... Sep 13 00:47:15.041319 ignition[719]: Ignition 2.14.0 Sep 13 00:47:15.041330 ignition[719]: Stage: disks Sep 13 00:47:15.041462 ignition[719]: reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:47:15.041483 ignition[719]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Sep 13 00:47:15.042854 ignition[719]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 13 00:47:15.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:15.045371 systemd[1]: Finished ignition-disks.service. Sep 13 00:47:15.043773 ignition[719]: disks: disks passed Sep 13 00:47:15.046305 systemd[1]: Reached target initrd-root-device.target. Sep 13 00:47:15.043818 ignition[719]: Ignition finished successfully Sep 13 00:47:15.046785 systemd[1]: Reached target local-fs-pre.target. Sep 13 00:47:15.047266 systemd[1]: Reached target local-fs.target. Sep 13 00:47:15.048060 systemd[1]: Reached target sysinit.target. Sep 13 00:47:15.048714 systemd[1]: Reached target basic.target. Sep 13 00:47:15.051065 systemd[1]: Starting systemd-fsck-root.service... Sep 13 00:47:15.072299 systemd-fsck[726]: ROOT: clean, 629/553520 files, 56028/553472 blocks Sep 13 00:47:15.077042 systemd[1]: Finished systemd-fsck-root.service. Sep 13 00:47:15.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:15.079132 systemd[1]: Mounting sysroot.mount... Sep 13 00:47:15.094052 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. 
Sep 13 00:47:15.094584 systemd[1]: Mounted sysroot.mount. Sep 13 00:47:15.095308 systemd[1]: Reached target initrd-root-fs.target. Sep 13 00:47:15.097962 systemd[1]: Mounting sysroot-usr.mount... Sep 13 00:47:15.100083 systemd[1]: Starting flatcar-digitalocean-network.service... Sep 13 00:47:15.102775 systemd[1]: Starting flatcar-metadata-hostname.service... Sep 13 00:47:15.103511 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 13 00:47:15.103570 systemd[1]: Reached target ignition-diskful.target. Sep 13 00:47:15.111359 systemd[1]: Mounted sysroot-usr.mount. Sep 13 00:47:15.116712 systemd[1]: Starting initrd-setup-root.service... Sep 13 00:47:15.132690 initrd-setup-root[738]: cut: /sysroot/etc/passwd: No such file or directory Sep 13 00:47:15.149707 initrd-setup-root[746]: cut: /sysroot/etc/group: No such file or directory Sep 13 00:47:15.160908 initrd-setup-root[754]: cut: /sysroot/etc/shadow: No such file or directory Sep 13 00:47:15.176941 initrd-setup-root[764]: cut: /sysroot/etc/gshadow: No such file or directory Sep 13 00:47:15.252124 coreos-metadata[732]: Sep 13 00:47:15.251 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Sep 13 00:47:15.269692 coreos-metadata[732]: Sep 13 00:47:15.269 INFO Fetch successful Sep 13 00:47:15.279850 systemd[1]: Finished initrd-setup-root.service. Sep 13 00:47:15.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:15.281393 systemd[1]: Starting ignition-mount.service... Sep 13 00:47:15.282754 systemd[1]: Starting sysroot-boot.service... Sep 13 00:47:15.292183 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Sep 13 00:47:15.292293 systemd[1]: Finished flatcar-digitalocean-network.service. 
Sep 13 00:47:15.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:15.292000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:15.311036 bash[784]: umount: /sysroot/usr/share/oem: not mounted. Sep 13 00:47:15.317823 coreos-metadata[733]: Sep 13 00:47:15.317 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Sep 13 00:47:15.326726 ignition[785]: INFO : Ignition 2.14.0 Sep 13 00:47:15.326726 ignition[785]: INFO : Stage: mount Sep 13 00:47:15.328489 ignition[785]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:47:15.328489 ignition[785]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Sep 13 00:47:15.330764 coreos-metadata[733]: Sep 13 00:47:15.330 INFO Fetch successful Sep 13 00:47:15.332305 coreos-metadata[733]: Sep 13 00:47:15.332 INFO wrote hostname ci-3510.3.8-n-17df7d76e4 to /sysroot/etc/hostname Sep 13 00:47:15.334040 systemd[1]: Finished flatcar-metadata-hostname.service. Sep 13 00:47:15.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:47:15.336012 ignition[785]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 13 00:47:15.338753 ignition[785]: INFO : mount: mount passed Sep 13 00:47:15.338753 ignition[785]: INFO : Ignition finished successfully Sep 13 00:47:15.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:15.340929 systemd[1]: Finished ignition-mount.service. Sep 13 00:47:15.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:15.345232 systemd[1]: Finished sysroot-boot.service. Sep 13 00:47:15.365483 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 13 00:47:15.379031 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (793) Sep 13 00:47:15.393257 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 00:47:15.393361 kernel: BTRFS info (device vda6): using free space tree Sep 13 00:47:15.393397 kernel: BTRFS info (device vda6): has skinny extents Sep 13 00:47:15.399149 systemd[1]: Mounted sysroot-usr-share-oem.mount. Sep 13 00:47:15.401643 systemd[1]: Starting ignition-files.service... 
Sep 13 00:47:15.426549 ignition[813]: INFO : Ignition 2.14.0 Sep 13 00:47:15.426549 ignition[813]: INFO : Stage: files Sep 13 00:47:15.428162 ignition[813]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:47:15.428162 ignition[813]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Sep 13 00:47:15.432118 ignition[813]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 13 00:47:15.433355 ignition[813]: DEBUG : files: compiled without relabeling support, skipping Sep 13 00:47:15.434886 ignition[813]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 13 00:47:15.434886 ignition[813]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 13 00:47:15.437993 ignition[813]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 13 00:47:15.438536 ignition[813]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 13 00:47:15.439242 ignition[813]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 13 00:47:15.439006 unknown[813]: wrote ssh authorized keys file for user: core Sep 13 00:47:15.440427 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 13 00:47:15.440427 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Sep 13 00:47:15.481556 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 13 00:47:15.673406 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 13 00:47:15.674633 ignition[813]: INFO : files: createFilesystemsFiles: 
createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 13 00:47:15.674633 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Sep 13 00:47:15.876773 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 13 00:47:16.012925 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 13 00:47:16.013835 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 13 00:47:16.014817 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 13 00:47:16.015513 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 13 00:47:16.016414 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 13 00:47:16.017097 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 00:47:16.017097 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 00:47:16.017097 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 00:47:16.017097 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 00:47:16.017097 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:47:16.020075 ignition[813]: INFO : files: createFilesystemsFiles: 
createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 00:47:16.020075 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 13 00:47:16.020075 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 13 00:47:16.020075 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 13 00:47:16.020075 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Sep 13 00:47:16.160525 systemd-networkd[688]: eth0: Gained IPv6LL Sep 13 00:47:16.161187 systemd-networkd[688]: eth1: Gained IPv6LL Sep 13 00:47:16.379170 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 13 00:47:17.084840 ignition[813]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 13 00:47:17.085930 ignition[813]: INFO : files: op(c): [started] processing unit "coreos-metadata-sshkeys@.service" Sep 13 00:47:17.086475 ignition[813]: INFO : files: op(c): [finished] processing unit "coreos-metadata-sshkeys@.service" Sep 13 00:47:17.086997 ignition[813]: INFO : files: op(d): [started] processing unit "prepare-helm.service" Sep 13 00:47:17.088018 ignition[813]: INFO : files: op(d): op(e): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:47:17.089332 ignition[813]: INFO : files: op(d): op(e): [finished] writing unit "prepare-helm.service" at 
"/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 00:47:17.090056 ignition[813]: INFO : files: op(d): [finished] processing unit "prepare-helm.service" Sep 13 00:47:17.090634 ignition[813]: INFO : files: op(f): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Sep 13 00:47:17.098188 ignition[813]: INFO : files: op(f): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Sep 13 00:47:17.098188 ignition[813]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Sep 13 00:47:17.099375 ignition[813]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Sep 13 00:47:17.105485 ignition[813]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:47:17.106174 ignition[813]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 13 00:47:17.106174 ignition[813]: INFO : files: files passed Sep 13 00:47:17.106174 ignition[813]: INFO : Ignition finished successfully Sep 13 00:47:17.108182 systemd[1]: Finished ignition-files.service. Sep 13 00:47:17.113459 kernel: kauditd_printk_skb: 27 callbacks suppressed Sep 13 00:47:17.113497 kernel: audit: type=1130 audit(1757724437.107:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:17.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:17.109546 systemd[1]: Starting initrd-setup-root-after-ignition.service... Sep 13 00:47:17.113799 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). 
Sep 13 00:47:17.114887 systemd[1]: Starting ignition-quench.service... Sep 13 00:47:17.119409 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 13 00:47:17.126100 kernel: audit: type=1130 audit(1757724437.120:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:17.126158 kernel: audit: type=1131 audit(1757724437.120:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:17.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:17.120000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:17.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:17.119505 systemd[1]: Finished ignition-quench.service. Sep 13 00:47:17.130015 kernel: audit: type=1130 audit(1757724437.125:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:17.130046 initrd-setup-root-after-ignition[838]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 00:47:17.121594 systemd[1]: Finished initrd-setup-root-after-ignition.service. Sep 13 00:47:17.126533 systemd[1]: Reached target ignition-complete.target. 
Sep 13 00:47:17.132942 systemd[1]: Starting initrd-parse-etc.service... Sep 13 00:47:17.154795 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 13 00:47:17.154941 systemd[1]: Finished initrd-parse-etc.service. Sep 13 00:47:17.156034 systemd[1]: Reached target initrd-fs.target. Sep 13 00:47:17.161659 kernel: audit: type=1130 audit(1757724437.154:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:17.161689 kernel: audit: type=1131 audit(1757724437.154:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:17.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:17.154000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:17.161369 systemd[1]: Reached target initrd.target. Sep 13 00:47:17.161899 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Sep 13 00:47:17.162911 systemd[1]: Starting dracut-pre-pivot.service... Sep 13 00:47:17.175662 systemd[1]: Finished dracut-pre-pivot.service. Sep 13 00:47:17.175000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:17.178993 kernel: audit: type=1130 audit(1757724437.175:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:47:17.179255 systemd[1]: Starting initrd-cleanup.service... Sep 13 00:47:17.189943 systemd[1]: Stopped target nss-lookup.target. Sep 13 00:47:17.190862 systemd[1]: Stopped target remote-cryptsetup.target. Sep 13 00:47:17.191761 systemd[1]: Stopped target timers.target. Sep 13 00:47:17.198770 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 13 00:47:17.198988 systemd[1]: Stopped dracut-pre-pivot.service. Sep 13 00:47:17.202485 kernel: audit: type=1131 audit(1757724437.198:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:17.198000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:17.200120 systemd[1]: Stopped target initrd.target. Sep 13 00:47:17.202813 systemd[1]: Stopped target basic.target. Sep 13 00:47:17.203369 systemd[1]: Stopped target ignition-complete.target. Sep 13 00:47:17.204062 systemd[1]: Stopped target ignition-diskful.target. Sep 13 00:47:17.204586 systemd[1]: Stopped target initrd-root-device.target. Sep 13 00:47:17.205189 systemd[1]: Stopped target remote-fs.target. Sep 13 00:47:17.205706 systemd[1]: Stopped target remote-fs-pre.target. Sep 13 00:47:17.206286 systemd[1]: Stopped target sysinit.target. Sep 13 00:47:17.206861 systemd[1]: Stopped target local-fs.target. Sep 13 00:47:17.207384 systemd[1]: Stopped target local-fs-pre.target. Sep 13 00:47:17.208009 systemd[1]: Stopped target swap.target. Sep 13 00:47:17.208473 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 13 00:47:17.211734 kernel: audit: type=1131 audit(1757724437.208:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:47:17.208000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:17.208620 systemd[1]: Stopped dracut-pre-mount.service. Sep 13 00:47:17.209219 systemd[1]: Stopped target cryptsetup.target. Sep 13 00:47:17.215359 kernel: audit: type=1131 audit(1757724437.211:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:17.211000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:17.212175 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 13 00:47:17.215000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:17.212349 systemd[1]: Stopped dracut-initqueue.service. Sep 13 00:47:17.212986 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 13 00:47:17.215000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:17.213132 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Sep 13 00:47:17.216000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:17.215821 systemd[1]: ignition-files.service: Deactivated successfully. Sep 13 00:47:17.216027 systemd[1]: Stopped ignition-files.service. 
Sep 13 00:47:17.216800 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 13 00:47:17.216921 systemd[1]: Stopped flatcar-metadata-hostname.service. Sep 13 00:47:17.218535 systemd[1]: Stopping ignition-mount.service... Sep 13 00:47:17.222220 systemd[1]: Stopping iscsiuio.service... Sep 13 00:47:17.223991 systemd[1]: Stopping sysroot-boot.service... Sep 13 00:47:17.224726 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 13 00:47:17.225489 systemd[1]: Stopped systemd-udev-trigger.service. Sep 13 00:47:17.225000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:17.226426 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 13 00:47:17.227206 systemd[1]: Stopped dracut-pre-trigger.service. Sep 13 00:47:17.227000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:17.230684 systemd[1]: iscsiuio.service: Deactivated successfully. Sep 13 00:47:17.231481 systemd[1]: Stopped iscsiuio.service. Sep 13 00:47:17.231000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:17.234167 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 13 00:47:17.234694 systemd[1]: Finished initrd-cleanup.service. Sep 13 00:47:17.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:47:17.234000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:17.244917 ignition[851]: INFO : Ignition 2.14.0 Sep 13 00:47:17.246021 ignition[851]: INFO : Stage: umount Sep 13 00:47:17.246021 ignition[851]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Sep 13 00:47:17.246021 ignition[851]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Sep 13 00:47:17.249713 ignition[851]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 13 00:47:17.251924 ignition[851]: INFO : umount: umount passed Sep 13 00:47:17.252463 ignition[851]: INFO : Ignition finished successfully Sep 13 00:47:17.253170 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 13 00:47:17.253705 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 13 00:47:17.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:17.253822 systemd[1]: Stopped sysroot-boot.service. Sep 13 00:47:17.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:17.254451 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 13 00:47:17.254000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:17.254538 systemd[1]: Stopped ignition-mount.service. 
Sep 13 00:47:17.255000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:17.255091 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 13 00:47:17.255000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:17.255136 systemd[1]: Stopped ignition-disks.service. Sep 13 00:47:17.255514 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 13 00:47:17.256000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:17.255552 systemd[1]: Stopped ignition-kargs.service. Sep 13 00:47:17.256146 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 13 00:47:17.256184 systemd[1]: Stopped ignition-fetch.service. Sep 13 00:47:17.256678 systemd[1]: Stopped target network.target. Sep 13 00:47:17.257304 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 13 00:47:17.257348 systemd[1]: Stopped ignition-fetch-offline.service. Sep 13 00:47:17.257893 systemd[1]: Stopped target paths.target. Sep 13 00:47:17.258425 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 13 00:47:17.262060 systemd[1]: Stopped systemd-ask-password-console.path. Sep 13 00:47:17.262762 systemd[1]: Stopped target slices.target. Sep 13 00:47:17.263174 systemd[1]: Stopped target sockets.target. Sep 13 00:47:17.263751 systemd[1]: iscsid.socket: Deactivated successfully. Sep 13 00:47:17.263784 systemd[1]: Closed iscsid.socket. Sep 13 00:47:17.264335 systemd[1]: iscsiuio.socket: Deactivated successfully. 
Sep 13 00:47:17.264000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:17.264375 systemd[1]: Closed iscsiuio.socket. Sep 13 00:47:17.264000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:17.264901 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 13 00:47:17.264950 systemd[1]: Stopped ignition-setup.service. Sep 13 00:47:17.265436 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 13 00:47:17.265473 systemd[1]: Stopped initrd-setup-root.service. Sep 13 00:47:17.266404 systemd[1]: Stopping systemd-networkd.service... Sep 13 00:47:17.267143 systemd[1]: Stopping systemd-resolved.service... Sep 13 00:47:17.271408 systemd-networkd[688]: eth1: DHCPv6 lease lost Sep 13 00:47:17.275078 systemd-networkd[688]: eth0: DHCPv6 lease lost Sep 13 00:47:17.276510 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 13 00:47:17.276613 systemd[1]: Stopped systemd-resolved.service. Sep 13 00:47:17.276000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:17.277925 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 13 00:47:17.278073 systemd[1]: Stopped systemd-networkd.service. Sep 13 00:47:17.277000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:17.279303 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 13 00:47:17.279343 systemd[1]: Closed systemd-networkd.socket. 
Sep 13 00:47:17.279000 audit: BPF prog-id=6 op=UNLOAD Sep 13 00:47:17.280000 audit: BPF prog-id=9 op=UNLOAD Sep 13 00:47:17.280000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:17.281000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:17.281000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:17.281055 systemd[1]: Stopping network-cleanup.service... Sep 13 00:47:17.281412 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 13 00:47:17.281474 systemd[1]: Stopped parse-ip-for-networkd.service. Sep 13 00:47:17.281836 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 00:47:17.281876 systemd[1]: Stopped systemd-sysctl.service. Sep 13 00:47:17.282342 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 13 00:47:17.282384 systemd[1]: Stopped systemd-modules-load.service. Sep 13 00:47:17.287784 systemd[1]: Stopping systemd-udevd.service... Sep 13 00:47:17.289634 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 13 00:47:17.293304 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 13 00:47:17.293410 systemd[1]: Stopped network-cleanup.service. Sep 13 00:47:17.293000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:17.294998 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Sep 13 00:47:17.295144 systemd[1]: Stopped systemd-udevd.service. Sep 13 00:47:17.294000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:17.296153 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 13 00:47:17.296208 systemd[1]: Closed systemd-udevd-control.socket. Sep 13 00:47:17.297048 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 13 00:47:17.297082 systemd[1]: Closed systemd-udevd-kernel.socket. Sep 13 00:47:17.297000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:17.297539 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 13 00:47:17.298000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:17.297582 systemd[1]: Stopped dracut-pre-udev.service. Sep 13 00:47:17.304000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:17.298460 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 13 00:47:17.298507 systemd[1]: Stopped dracut-cmdline.service. Sep 13 00:47:17.306000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:17.299115 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 13 00:47:17.299183 systemd[1]: Stopped dracut-cmdline-ask.service. 
Sep 13 00:47:17.306509 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Sep 13 00:47:17.308000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:17.307437 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 13 00:47:17.309000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:17.307498 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Sep 13 00:47:17.309078 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 13 00:47:17.309126 systemd[1]: Stopped kmod-static-nodes.service. Sep 13 00:47:17.309967 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 00:47:17.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:17.313000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:17.310026 systemd[1]: Stopped systemd-vconsole-setup.service. Sep 13 00:47:17.313346 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 13 00:47:17.313852 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 13 00:47:17.313940 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Sep 13 00:47:17.314452 systemd[1]: Reached target initrd-switch-root.target. Sep 13 00:47:17.315799 systemd[1]: Starting initrd-switch-root.service... Sep 13 00:47:17.329550 systemd[1]: Switching root. 
Sep 13 00:47:17.350265 iscsid[698]: iscsid shutting down. Sep 13 00:47:17.351034 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Sep 13 00:47:17.351124 systemd-journald[184]: Journal stopped Sep 13 00:47:20.754946 kernel: SELinux: Class mctp_socket not defined in policy. Sep 13 00:47:20.755026 kernel: SELinux: Class anon_inode not defined in policy. Sep 13 00:47:20.755045 kernel: SELinux: the above unknown classes and permissions will be allowed Sep 13 00:47:20.755057 kernel: SELinux: policy capability network_peer_controls=1 Sep 13 00:47:20.755069 kernel: SELinux: policy capability open_perms=1 Sep 13 00:47:20.755086 kernel: SELinux: policy capability extended_socket_class=1 Sep 13 00:47:20.755099 kernel: SELinux: policy capability always_check_network=0 Sep 13 00:47:20.755114 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 13 00:47:20.755125 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 13 00:47:20.755136 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 13 00:47:20.755147 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 13 00:47:20.755167 systemd[1]: Successfully loaded SELinux policy in 42.923ms. Sep 13 00:47:20.755191 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.180ms. Sep 13 00:47:20.755205 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 13 00:47:20.755220 systemd[1]: Detected virtualization kvm. Sep 13 00:47:20.755231 systemd[1]: Detected architecture x86-64. Sep 13 00:47:20.755244 systemd[1]: Detected first boot. Sep 13 00:47:20.755256 systemd[1]: Hostname set to . Sep 13 00:47:20.755271 systemd[1]: Initializing machine ID from VM UUID. 
Sep 13 00:47:20.755283 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Sep 13 00:47:20.755296 systemd[1]: Populated /etc with preset unit settings.
Sep 13 00:47:20.755310 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 13 00:47:20.755327 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 13 00:47:20.755342 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:47:20.755360 systemd[1]: iscsid.service: Deactivated successfully.
Sep 13 00:47:20.755375 systemd[1]: Stopped iscsid.service.
Sep 13 00:47:20.755391 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 13 00:47:20.755403 systemd[1]: Stopped initrd-switch-root.service.
Sep 13 00:47:20.755415 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 13 00:47:20.755427 systemd[1]: Created slice system-addon\x2dconfig.slice.
Sep 13 00:47:20.755440 systemd[1]: Created slice system-addon\x2drun.slice.
Sep 13 00:47:20.755456 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Sep 13 00:47:20.755468 systemd[1]: Created slice system-getty.slice.
Sep 13 00:47:20.755480 systemd[1]: Created slice system-modprobe.slice.
Sep 13 00:47:20.755495 systemd[1]: Created slice system-serial\x2dgetty.slice.
Sep 13 00:47:20.755508 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Sep 13 00:47:20.755521 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Sep 13 00:47:20.755533 systemd[1]: Created slice user.slice.
Sep 13 00:47:20.755546 systemd[1]: Started systemd-ask-password-console.path.
Sep 13 00:47:20.755557 systemd[1]: Started systemd-ask-password-wall.path.
Sep 13 00:47:20.755570 systemd[1]: Set up automount boot.automount.
Sep 13 00:47:20.755585 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Sep 13 00:47:20.755597 systemd[1]: Stopped target initrd-switch-root.target.
Sep 13 00:47:20.755609 systemd[1]: Stopped target initrd-fs.target.
Sep 13 00:47:20.755621 systemd[1]: Stopped target initrd-root-fs.target.
Sep 13 00:47:20.755634 systemd[1]: Reached target integritysetup.target.
Sep 13 00:47:20.755645 systemd[1]: Reached target remote-cryptsetup.target.
Sep 13 00:47:20.755658 systemd[1]: Reached target remote-fs.target.
Sep 13 00:47:20.755670 systemd[1]: Reached target slices.target.
Sep 13 00:47:20.755682 systemd[1]: Reached target swap.target.
Sep 13 00:47:20.755697 systemd[1]: Reached target torcx.target.
Sep 13 00:47:20.755722 systemd[1]: Reached target veritysetup.target.
Sep 13 00:47:20.755740 systemd[1]: Listening on systemd-coredump.socket.
Sep 13 00:47:20.755756 systemd[1]: Listening on systemd-initctl.socket.
Sep 13 00:47:20.755769 systemd[1]: Listening on systemd-networkd.socket.
Sep 13 00:47:20.755781 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 13 00:47:20.755796 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 13 00:47:20.755808 systemd[1]: Listening on systemd-userdbd.socket.
Sep 13 00:47:20.755820 systemd[1]: Mounting dev-hugepages.mount...
Sep 13 00:47:20.755836 systemd[1]: Mounting dev-mqueue.mount...
Sep 13 00:47:20.755848 systemd[1]: Mounting media.mount...
Sep 13 00:47:20.755860 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:47:20.755872 systemd[1]: Mounting sys-kernel-debug.mount...
Sep 13 00:47:20.755884 systemd[1]: Mounting sys-kernel-tracing.mount...
Sep 13 00:47:20.755896 systemd[1]: Mounting tmp.mount...
Sep 13 00:47:20.755908 systemd[1]: Starting flatcar-tmpfiles.service...
Sep 13 00:47:20.755920 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 13 00:47:20.755932 systemd[1]: Starting kmod-static-nodes.service...
Sep 13 00:47:20.755948 systemd[1]: Starting modprobe@configfs.service...
Sep 13 00:47:20.755961 systemd[1]: Starting modprobe@dm_mod.service...
Sep 13 00:47:20.756091 systemd[1]: Starting modprobe@drm.service...
Sep 13 00:47:20.756117 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 13 00:47:20.756130 systemd[1]: Starting modprobe@fuse.service...
Sep 13 00:47:20.756144 systemd[1]: Starting modprobe@loop.service...
Sep 13 00:47:20.756158 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 13 00:47:20.756170 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 13 00:47:20.756183 systemd[1]: Stopped systemd-fsck-root.service.
Sep 13 00:47:20.756200 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 13 00:47:20.756213 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 13 00:47:20.756226 systemd[1]: Stopped systemd-journald.service.
Sep 13 00:47:20.756237 systemd[1]: Starting systemd-journald.service...
Sep 13 00:47:20.756250 systemd[1]: Starting systemd-modules-load.service...
Sep 13 00:47:20.756263 systemd[1]: Starting systemd-network-generator.service...
Sep 13 00:47:20.756275 kernel: fuse: init (API version 7.34)
Sep 13 00:47:20.756288 systemd[1]: Starting systemd-remount-fs.service...
Sep 13 00:47:20.756300 systemd[1]: Starting systemd-udev-trigger.service...
Sep 13 00:47:20.756313 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 13 00:47:20.756328 systemd[1]: Stopped verity-setup.service.
Sep 13 00:47:20.756340 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:47:20.756352 systemd[1]: Mounted dev-hugepages.mount.
Sep 13 00:47:20.756364 systemd[1]: Mounted dev-mqueue.mount.
Sep 13 00:47:20.756376 systemd[1]: Mounted media.mount.
Sep 13 00:47:20.756389 systemd[1]: Mounted sys-kernel-debug.mount.
Sep 13 00:47:20.756402 systemd[1]: Mounted sys-kernel-tracing.mount.
Sep 13 00:47:20.756415 systemd[1]: Mounted tmp.mount.
Sep 13 00:47:20.756430 systemd[1]: Finished kmod-static-nodes.service.
Sep 13 00:47:20.756442 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 13 00:47:20.756455 systemd[1]: Finished modprobe@configfs.service.
Sep 13 00:47:20.756467 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:47:20.756480 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 00:47:20.756496 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 00:47:20.756508 systemd[1]: Finished modprobe@drm.service.
Sep 13 00:47:20.756520 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:47:20.756532 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 00:47:20.756544 systemd[1]: Finished systemd-modules-load.service.
Sep 13 00:47:20.756556 systemd[1]: Finished systemd-network-generator.service.
Sep 13 00:47:20.756569 systemd[1]: Finished systemd-remount-fs.service.
Sep 13 00:47:20.756581 systemd[1]: Reached target network-pre.target.
Sep 13 00:47:20.756594 systemd[1]: Mounting sys-kernel-config.mount...
Sep 13 00:47:20.756609 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 13 00:47:20.756621 kernel: loop: module loaded
Sep 13 00:47:20.756633 systemd[1]: Starting systemd-hwdb-update.service...
Sep 13 00:47:20.756646 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:47:20.756658 systemd[1]: Starting systemd-random-seed.service...
Sep 13 00:47:20.756675 systemd-journald[949]: Journal started
Sep 13 00:47:20.756736 systemd-journald[949]: Runtime Journal (/run/log/journal/1c90a5770b91439cbba4a4bcbf6b3e5e) is 4.9M, max 39.5M, 34.5M free.
Sep 13 00:47:17.483000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 13 00:47:17.532000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Sep 13 00:47:17.532000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Sep 13 00:47:17.532000 audit: BPF prog-id=10 op=LOAD
Sep 13 00:47:17.532000 audit: BPF prog-id=10 op=UNLOAD
Sep 13 00:47:17.533000 audit: BPF prog-id=11 op=LOAD
Sep 13 00:47:17.533000 audit: BPF prog-id=11 op=UNLOAD
Sep 13 00:47:17.632000 audit[883]: AVC avc: denied { associate } for pid=883 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Sep 13 00:47:17.632000 audit[883]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001878d4 a1=c00002ae58 a2=c000029100 a3=32 items=0 ppid=866 pid=883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:47:17.632000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Sep 13 00:47:17.634000 audit[883]: AVC avc: denied { associate } for pid=883 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Sep 13 00:47:17.634000 audit[883]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001879b9 a2=1ed a3=0 items=2 ppid=866 pid=883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:47:17.634000 audit: CWD cwd="/"
Sep 13 00:47:17.634000 audit: PATH item=0 name=(null) inode=2 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:47:17.634000 audit: PATH item=1 name=(null) inode=3 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:47:17.634000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Sep 13 00:47:20.569000 audit: BPF prog-id=12 op=LOAD
Sep 13 00:47:20.569000 audit: BPF prog-id=3 op=UNLOAD
Sep 13 00:47:20.570000 audit: BPF prog-id=13 op=LOAD
Sep 13 00:47:20.570000 audit: BPF prog-id=14 op=LOAD
Sep 13 00:47:20.570000 audit: BPF prog-id=4 op=UNLOAD
Sep 13 00:47:20.570000 audit: BPF prog-id=5 op=UNLOAD
Sep 13 00:47:20.571000 audit: BPF prog-id=15 op=LOAD
Sep 13 00:47:20.571000 audit: BPF prog-id=12 op=UNLOAD
Sep 13 00:47:20.571000 audit: BPF prog-id=16 op=LOAD
Sep 13 00:47:20.571000 audit: BPF prog-id=17 op=LOAD
Sep 13 00:47:20.571000 audit: BPF prog-id=13 op=UNLOAD
Sep 13 00:47:20.571000 audit: BPF prog-id=14 op=UNLOAD
Sep 13 00:47:20.572000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:20.574000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:20.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:20.577000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:20.585000 audit: BPF prog-id=15 op=UNLOAD
Sep 13 00:47:20.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:20.669000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:20.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:20.675000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:20.676000 audit: BPF prog-id=18 op=LOAD
Sep 13 00:47:20.676000 audit: BPF prog-id=19 op=LOAD
Sep 13 00:47:20.677000 audit: BPF prog-id=20 op=LOAD
Sep 13 00:47:20.677000 audit: BPF prog-id=16 op=UNLOAD
Sep 13 00:47:20.677000 audit: BPF prog-id=17 op=UNLOAD
Sep 13 00:47:20.698000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:20.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:20.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:20.715000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:20.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:20.718000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:20.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:20.722000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:20.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:20.725000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:20.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:20.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:20.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:20.744000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Sep 13 00:47:20.744000 audit[949]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7fffa5f28930 a2=4000 a3=7fffa5f289cc items=0 ppid=1 pid=949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:47:20.744000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Sep 13 00:47:17.629566 /usr/lib/systemd/system-generators/torcx-generator[883]: time="2025-09-13T00:47:17Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 13 00:47:20.568859 systemd[1]: Queued start job for default target multi-user.target.
Sep 13 00:47:20.780857 systemd[1]: Starting systemd-sysctl.service...
Sep 13 00:47:20.780885 systemd[1]: Started systemd-journald.service.
Sep 13 00:47:20.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:20.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:20.767000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:20.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:20.771000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:17.630156 /usr/lib/systemd/system-generators/torcx-generator[883]: time="2025-09-13T00:47:17Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Sep 13 00:47:20.568874 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Sep 13 00:47:17.630193 /usr/lib/systemd/system-generators/torcx-generator[883]: time="2025-09-13T00:47:17Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Sep 13 00:47:20.573325 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 13 00:47:17.630244 /usr/lib/systemd/system-generators/torcx-generator[883]: time="2025-09-13T00:47:17Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Sep 13 00:47:20.768203 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 13 00:47:17.630261 /usr/lib/systemd/system-generators/torcx-generator[883]: time="2025-09-13T00:47:17Z" level=debug msg="skipped missing lower profile" missing profile=oem
Sep 13 00:47:20.768336 systemd[1]: Finished modprobe@fuse.service.
Sep 13 00:47:17.630325 /usr/lib/systemd/system-generators/torcx-generator[883]: time="2025-09-13T00:47:17Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Sep 13 00:47:20.769190 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:47:17.630349 /usr/lib/systemd/system-generators/torcx-generator[883]: time="2025-09-13T00:47:17Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Sep 13 00:47:20.769325 systemd[1]: Finished modprobe@loop.service.
Sep 13 00:47:17.630695 /usr/lib/systemd/system-generators/torcx-generator[883]: time="2025-09-13T00:47:17Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Sep 13 00:47:20.772178 systemd[1]: Mounted sys-kernel-config.mount.
Sep 13 00:47:17.630754 /usr/lib/systemd/system-generators/torcx-generator[883]: time="2025-09-13T00:47:17Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Sep 13 00:47:20.774080 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Sep 13 00:47:17.630767 /usr/lib/systemd/system-generators/torcx-generator[883]: time="2025-09-13T00:47:17Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Sep 13 00:47:20.779047 systemd[1]: Starting systemd-journal-flush.service...
Sep 13 00:47:17.632456 /usr/lib/systemd/system-generators/torcx-generator[883]: time="2025-09-13T00:47:17Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Sep 13 00:47:20.779451 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 13 00:47:17.632498 /usr/lib/systemd/system-generators/torcx-generator[883]: time="2025-09-13T00:47:17Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Sep 13 00:47:17.632520 /usr/lib/systemd/system-generators/torcx-generator[883]: time="2025-09-13T00:47:17Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8
Sep 13 00:47:17.632549 /usr/lib/systemd/system-generators/torcx-generator[883]: time="2025-09-13T00:47:17Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Sep 13 00:47:17.632572 /usr/lib/systemd/system-generators/torcx-generator[883]: time="2025-09-13T00:47:17Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8
Sep 13 00:47:17.632587 /usr/lib/systemd/system-generators/torcx-generator[883]: time="2025-09-13T00:47:17Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Sep 13 00:47:20.136902 /usr/lib/systemd/system-generators/torcx-generator[883]: time="2025-09-13T00:47:20Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Sep 13 00:47:20.137224 /usr/lib/systemd/system-generators/torcx-generator[883]: time="2025-09-13T00:47:20Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Sep 13 00:47:20.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:20.137372 /usr/lib/systemd/system-generators/torcx-generator[883]: time="2025-09-13T00:47:20Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Sep 13 00:47:20.783701 systemd[1]: Finished systemd-random-seed.service.
Sep 13 00:47:20.137584 /usr/lib/systemd/system-generators/torcx-generator[883]: time="2025-09-13T00:47:20Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Sep 13 00:47:20.784353 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Sep 13 00:47:20.137638 /usr/lib/systemd/system-generators/torcx-generator[883]: time="2025-09-13T00:47:20Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Sep 13 00:47:20.785110 systemd[1]: Reached target first-boot-complete.target.
Sep 13 00:47:20.137712 /usr/lib/systemd/system-generators/torcx-generator[883]: time="2025-09-13T00:47:20Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Sep 13 00:47:20.791193 systemd-journald[949]: Time spent on flushing to /var/log/journal/1c90a5770b91439cbba4a4bcbf6b3e5e is 47.311ms for 1155 entries.
Sep 13 00:47:20.791193 systemd-journald[949]: System Journal (/var/log/journal/1c90a5770b91439cbba4a4bcbf6b3e5e) is 8.0M, max 195.6M, 187.6M free.
Sep 13 00:47:20.850509 systemd-journald[949]: Received client request to flush runtime journal.
Sep 13 00:47:20.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:20.794228 systemd[1]: Finished systemd-sysctl.service.
Sep 13 00:47:20.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:20.851499 systemd[1]: Finished systemd-journal-flush.service.
Sep 13 00:47:20.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:20.862869 systemd[1]: Finished flatcar-tmpfiles.service.
Sep 13 00:47:20.864848 systemd[1]: Starting systemd-sysusers.service...
Sep 13 00:47:20.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:20.871847 systemd[1]: Finished systemd-udev-trigger.service.
Sep 13 00:47:20.873690 systemd[1]: Starting systemd-udev-settle.service...
Sep 13 00:47:20.884013 udevadm[994]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Sep 13 00:47:20.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:20.896278 systemd[1]: Finished systemd-sysusers.service.
Sep 13 00:47:20.898013 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Sep 13 00:47:20.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:20.926401 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Sep 13 00:47:21.467024 systemd[1]: Finished systemd-hwdb-update.service.
Sep 13 00:47:21.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:21.467000 audit: BPF prog-id=21 op=LOAD
Sep 13 00:47:21.467000 audit: BPF prog-id=22 op=LOAD
Sep 13 00:47:21.467000 audit: BPF prog-id=7 op=UNLOAD
Sep 13 00:47:21.467000 audit: BPF prog-id=8 op=UNLOAD
Sep 13 00:47:21.469335 systemd[1]: Starting systemd-udevd.service...
Sep 13 00:47:21.490064 systemd-udevd[997]: Using default interface naming scheme 'v252'.
Sep 13 00:47:21.513000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:21.514000 audit: BPF prog-id=23 op=LOAD
Sep 13 00:47:21.513930 systemd[1]: Started systemd-udevd.service.
Sep 13 00:47:21.516264 systemd[1]: Starting systemd-networkd.service...
Sep 13 00:47:21.521000 audit: BPF prog-id=24 op=LOAD
Sep 13 00:47:21.521000 audit: BPF prog-id=25 op=LOAD
Sep 13 00:47:21.521000 audit: BPF prog-id=26 op=LOAD
Sep 13 00:47:21.523183 systemd[1]: Starting systemd-userdbd.service...
Sep 13 00:47:21.573813 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Sep 13 00:47:21.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:21.579235 systemd[1]: Started systemd-userdbd.service.
Sep 13 00:47:21.596529 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 13 00:47:21.597782 systemd[1]: Starting modprobe@dm_mod.service...
Sep 13 00:47:21.599314 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 13 00:47:21.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:21.604000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:21.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:21.606000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:21.603182 systemd[1]: Starting modprobe@loop.service...
Sep 13 00:47:21.603554 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 13 00:47:21.603634 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 13 00:47:21.604648 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:47:21.604810 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 00:47:21.607344 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:47:21.607467 systemd[1]: Finished modprobe@loop.service.
Sep 13 00:47:21.608139 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 13 00:47:21.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:21.609000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:21.610240 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:47:21.610366 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 00:47:21.610815 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:47:21.653999 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Sep 13 00:47:21.660017 kernel: ACPI: button: Power Button [PWRF]
Sep 13 00:47:21.691883 systemd-networkd[1003]: lo: Link UP
Sep 13 00:47:21.691893 systemd-networkd[1003]: lo: Gained carrier
Sep 13 00:47:21.692927 systemd-networkd[1003]: Enumeration completed
Sep 13 00:47:21.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:21.693041 systemd-networkd[1003]: eth1: Configuring with /run/systemd/network/10-36:5c:a0:dd:d3:7c.network.
Sep 13 00:47:21.693066 systemd[1]: Started systemd-networkd.service.
Sep 13 00:47:21.694315 systemd-networkd[1003]: eth0: Configuring with /run/systemd/network/10-a2:d8:7e:3b:3f:7c.network.
Sep 13 00:47:21.694976 systemd-networkd[1003]: eth1: Link UP
Sep 13 00:47:21.694993 systemd-networkd[1003]: eth1: Gained carrier
Sep 13 00:47:21.699304 systemd-networkd[1003]: eth0: Link UP
Sep 13 00:47:21.699316 systemd-networkd[1003]: eth0: Gained carrier
Sep 13 00:47:21.702455 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:47:21.702479 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:47:21.721619 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Sep 13 00:47:21.712000 audit[1008]: AVC avc: denied { confidentiality } for pid=1008 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Sep 13 00:47:21.712000 audit[1008]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=56140e16fdd0 a1=338ec a2=7f81e45dabc5 a3=5 items=110 ppid=997 pid=1008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:47:21.712000 audit: CWD cwd="/"
Sep 13 00:47:21.712000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:47:21.712000 audit: PATH item=1 name=(null) inode=13263 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:47:21.712000 audit: PATH item=2 name=(null) inode=13263 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:47:21.712000 audit: PATH item=3 name=(null) inode=13264 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:47:21.712000 audit: PATH item=4 name=(null) inode=13263 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:47:21.712000 audit: PATH item=5 name=(null) inode=13265 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:47:21.712000 audit: PATH item=6 name=(null) inode=13263 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:47:21.712000 audit: PATH item=7 name=(null) inode=13266 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:47:21.712000 audit: PATH item=8 name=(null) inode=13266 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:47:21.712000 audit: PATH item=9 name=(null) inode=13267 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:47:21.712000 audit: PATH item=10 name=(null) inode=13266 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:47:21.712000 audit: PATH item=11 name=(null) inode=13268 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:47:21.712000 audit: PATH item=12 name=(null) inode=13266 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:47:21.712000 audit: PATH item=13 name=(null) inode=13269 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:47:21.712000 audit: PATH item=14 name=(null) inode=13266 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:47:21.712000 audit: PATH item=15 name=(null) inode=13270 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:47:21.712000 audit: PATH item=16 name=(null) inode=13266 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:47:21.712000 audit: PATH item=17 name=(null) inode=13271 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:47:21.712000 audit: PATH item=18 name=(null) inode=13263 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:47:21.712000 audit: PATH item=19 name=(null) inode=13272 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:47:21.712000 audit: PATH item=20 name=(null) inode=13272 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 13 00:47:21.712000 audit: PATH item=21 name=(null) inode=13273
dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=22 name=(null) inode=13272 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=23 name=(null) inode=13274 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=24 name=(null) inode=13272 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=25 name=(null) inode=13275 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=26 name=(null) inode=13272 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=27 name=(null) inode=13276 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=28 name=(null) inode=13272 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=29 name=(null) inode=13277 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=30 name=(null) inode=13263 dev=00:0b mode=040750 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=31 name=(null) inode=13278 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=32 name=(null) inode=13278 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=33 name=(null) inode=13279 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=34 name=(null) inode=13278 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=35 name=(null) inode=13280 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=36 name=(null) inode=13278 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=37 name=(null) inode=13281 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=38 name=(null) inode=13278 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=39 name=(null) inode=13282 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=40 name=(null) inode=13278 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=41 name=(null) inode=13283 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=42 name=(null) inode=13263 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=43 name=(null) inode=13284 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=44 name=(null) inode=13284 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=45 name=(null) inode=13285 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=46 name=(null) inode=13284 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=47 name=(null) inode=13286 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=48 name=(null) inode=13284 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=49 name=(null) inode=13287 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=50 name=(null) inode=13284 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=51 name=(null) inode=13288 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=52 name=(null) inode=13284 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=53 name=(null) inode=13289 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=55 name=(null) inode=13290 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=56 name=(null) inode=13290 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=57 name=(null) inode=13291 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=58 name=(null) inode=13290 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=59 name=(null) inode=13292 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=60 name=(null) inode=13290 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=61 name=(null) inode=13293 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=62 name=(null) inode=13293 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=63 name=(null) inode=13294 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=64 name=(null) inode=13293 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=65 name=(null) inode=13295 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=66 name=(null) inode=13293 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 
00:47:21.712000 audit: PATH item=67 name=(null) inode=13296 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=68 name=(null) inode=13293 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=69 name=(null) inode=13297 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=70 name=(null) inode=13293 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=71 name=(null) inode=13298 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=72 name=(null) inode=13290 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=73 name=(null) inode=13299 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=74 name=(null) inode=13299 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=75 name=(null) inode=13300 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=76 
name=(null) inode=13299 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=77 name=(null) inode=13301 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=78 name=(null) inode=13299 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=79 name=(null) inode=13302 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=80 name=(null) inode=13299 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=81 name=(null) inode=13303 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=82 name=(null) inode=13299 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=83 name=(null) inode=13304 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=84 name=(null) inode=13290 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=85 name=(null) inode=13305 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=86 name=(null) inode=13305 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=87 name=(null) inode=13306 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=88 name=(null) inode=13305 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=89 name=(null) inode=13307 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=90 name=(null) inode=13305 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=91 name=(null) inode=13308 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=92 name=(null) inode=13305 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=93 name=(null) inode=13309 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=94 name=(null) inode=13305 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=95 name=(null) inode=13310 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=96 name=(null) inode=13290 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=97 name=(null) inode=13311 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=98 name=(null) inode=13311 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=99 name=(null) inode=13312 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=100 name=(null) inode=13311 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=101 name=(null) inode=14337 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=102 name=(null) inode=13311 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=103 name=(null) inode=14338 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=104 name=(null) inode=13311 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=105 name=(null) inode=14339 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=106 name=(null) inode=13311 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=107 name=(null) inode=14340 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PATH item=109 name=(null) inode=14341 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 13 00:47:21.712000 audit: PROCTITLE proctitle="(udev-worker)" Sep 13 00:47:21.776000 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 13 00:47:21.779001 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Sep 13 00:47:21.783055 kernel: mousedev: PS/2 mouse device common for all mice Sep 13 00:47:21.882035 kernel: EDAC MC: Ver: 3.0.0 Sep 13 00:47:21.904688 systemd[1]: Finished systemd-udev-settle.service. 
Sep 13 00:47:21.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:21.907233 systemd[1]: Starting lvm2-activation-early.service... Sep 13 00:47:21.929846 lvm[1035]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:47:21.960119 systemd[1]: Finished lvm2-activation-early.service. Sep 13 00:47:21.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:21.960743 systemd[1]: Reached target cryptsetup.target. Sep 13 00:47:21.962844 systemd[1]: Starting lvm2-activation.service... Sep 13 00:47:21.970303 lvm[1036]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 00:47:22.001532 systemd[1]: Finished lvm2-activation.service. Sep 13 00:47:22.002385 systemd[1]: Reached target local-fs-pre.target. Sep 13 00:47:22.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:22.005559 systemd[1]: Mounting media-configdrive.mount... Sep 13 00:47:22.006270 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 13 00:47:22.006347 systemd[1]: Reached target machines.target. Sep 13 00:47:22.009379 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Sep 13 00:47:22.024346 kernel: ISO 9660 Extensions: RRIP_1991A Sep 13 00:47:22.025938 systemd[1]: Mounted media-configdrive.mount. Sep 13 00:47:22.026498 systemd[1]: Reached target local-fs.target. 
Sep 13 00:47:22.029195 systemd[1]: Starting ldconfig.service... Sep 13 00:47:22.030748 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:47:22.030850 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:47:22.033247 systemd[1]: Starting systemd-boot-update.service... Sep 13 00:47:22.037451 systemd[1]: Starting systemd-machine-id-commit.service... Sep 13 00:47:22.043291 systemd[1]: Starting systemd-sysext.service... Sep 13 00:47:22.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:22.047400 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Sep 13 00:47:22.057509 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1042 (bootctl) Sep 13 00:47:22.060494 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Sep 13 00:47:22.066555 systemd[1]: Unmounting usr-share-oem.mount... Sep 13 00:47:22.078559 systemd[1]: usr-share-oem.mount: Deactivated successfully. Sep 13 00:47:22.078865 systemd[1]: Unmounted usr-share-oem.mount. Sep 13 00:47:22.103092 kernel: loop0: detected capacity change from 0 to 224512 Sep 13 00:47:22.140855 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 13 00:47:22.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:22.142145 systemd[1]: Finished systemd-machine-id-commit.service. 
Sep 13 00:47:22.143804 kernel: kauditd_printk_skb: 239 callbacks suppressed Sep 13 00:47:22.143901 kernel: audit: type=1130 audit(1757724442.141:165): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:22.167012 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 13 00:47:22.185019 kernel: loop1: detected capacity change from 0 to 224512 Sep 13 00:47:22.198214 systemd-fsck[1049]: fsck.fat 4.2 (2021-01-31) Sep 13 00:47:22.198214 systemd-fsck[1049]: /dev/vda1: 790 files, 120761/258078 clusters Sep 13 00:47:22.201587 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Sep 13 00:47:22.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:22.203652 systemd[1]: Mounting boot.mount... Sep 13 00:47:22.206668 kernel: audit: type=1130 audit(1757724442.201:166): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:22.207556 (sd-sysext)[1052]: Using extensions 'kubernetes'. Sep 13 00:47:22.210153 (sd-sysext)[1052]: Merged extensions into '/usr'. Sep 13 00:47:22.227791 systemd[1]: Mounted boot.mount. Sep 13 00:47:22.238279 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 13 00:47:22.240588 systemd[1]: Starting modprobe@dm_mod.service... Sep 13 00:47:22.245694 systemd[1]: Starting modprobe@efi_pstore.service... Sep 13 00:47:22.248963 systemd[1]: Starting modprobe@loop.service... 
Sep 13 00:47:22.249848 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 13 00:47:22.250119 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 13 00:47:22.251077 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 00:47:22.251484 systemd[1]: Finished modprobe@dm_mod.service. Sep 13 00:47:22.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:22.254605 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 00:47:22.254808 systemd[1]: Finished modprobe@efi_pstore.service. Sep 13 00:47:22.256813 kernel: audit: type=1130 audit(1757724442.251:167): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:22.256863 kernel: audit: type=1131 audit(1757724442.253:168): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:22.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:22.256492 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 00:47:22.256642 systemd[1]: Finished modprobe@loop.service. 
Sep 13 00:47:22.264021 kernel: audit: type=1130 audit(1757724442.255:169): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:22.264452 kernel: audit: type=1131 audit(1757724442.255:170): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:22.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:22.255000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:22.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:22.265727 systemd[1]: Finished systemd-boot-update.service. Sep 13 00:47:22.264000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:22.272036 kernel: audit: type=1130 audit(1757724442.264:171): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 13 00:47:22.272178 kernel: audit: type=1131 audit(1757724442.264:172): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:22.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:22.272910 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 00:47:22.273071 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 13 00:47:22.281002 kernel: audit: type=1130 audit(1757724442.271:173): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:22.483486 ldconfig[1041]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 13 00:47:22.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:22.488312 systemd[1]: Finished ldconfig.service. Sep 13 00:47:22.493030 kernel: audit: type=1130 audit(1757724442.488:174): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 13 00:47:22.701396 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 00:47:22.703361 systemd[1]: Mounting usr-share-oem.mount... 
Sep 13 00:47:22.703800 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:47:22.713669 systemd[1]: Mounted usr-share-oem.mount.
Sep 13 00:47:22.715482 systemd[1]: Finished systemd-sysext.service.
Sep 13 00:47:22.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:22.718036 systemd[1]: Starting ensure-sysext.service...
Sep 13 00:47:22.719677 systemd[1]: Starting systemd-tmpfiles-setup.service...
Sep 13 00:47:22.730405 systemd[1]: Reloading.
Sep 13 00:47:22.737159 systemd-tmpfiles[1060]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Sep 13 00:47:22.740703 systemd-tmpfiles[1060]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 13 00:47:22.745260 systemd-tmpfiles[1060]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 13 00:47:22.752258 systemd-networkd[1003]: eth1: Gained IPv6LL
Sep 13 00:47:22.844170 /usr/lib/systemd/system-generators/torcx-generator[1079]: time="2025-09-13T00:47:22Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 13 00:47:22.844200 /usr/lib/systemd/system-generators/torcx-generator[1079]: time="2025-09-13T00:47:22Z" level=info msg="torcx already run"
Sep 13 00:47:22.941779 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 13 00:47:22.941813 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 13 00:47:22.962488 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:47:23.021000 audit: BPF prog-id=27 op=LOAD
Sep 13 00:47:23.021000 audit: BPF prog-id=24 op=UNLOAD
Sep 13 00:47:23.021000 audit: BPF prog-id=28 op=LOAD
Sep 13 00:47:23.021000 audit: BPF prog-id=29 op=LOAD
Sep 13 00:47:23.021000 audit: BPF prog-id=25 op=UNLOAD
Sep 13 00:47:23.021000 audit: BPF prog-id=26 op=UNLOAD
Sep 13 00:47:23.024000 audit: BPF prog-id=30 op=LOAD
Sep 13 00:47:23.024000 audit: BPF prog-id=23 op=UNLOAD
Sep 13 00:47:23.025000 audit: BPF prog-id=31 op=LOAD
Sep 13 00:47:23.025000 audit: BPF prog-id=32 op=LOAD
Sep 13 00:47:23.025000 audit: BPF prog-id=21 op=UNLOAD
Sep 13 00:47:23.025000 audit: BPF prog-id=22 op=UNLOAD
Sep 13 00:47:23.027000 audit: BPF prog-id=33 op=LOAD
Sep 13 00:47:23.027000 audit: BPF prog-id=18 op=UNLOAD
Sep 13 00:47:23.027000 audit: BPF prog-id=34 op=LOAD
Sep 13 00:47:23.027000 audit: BPF prog-id=35 op=LOAD
Sep 13 00:47:23.027000 audit: BPF prog-id=19 op=UNLOAD
Sep 13 00:47:23.027000 audit: BPF prog-id=20 op=UNLOAD
Sep 13 00:47:23.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:23.031399 systemd[1]: Finished systemd-tmpfiles-setup.service.
Sep 13 00:47:23.036993 systemd[1]: Starting audit-rules.service...
Sep 13 00:47:23.039217 systemd[1]: Starting clean-ca-certificates.service...
Sep 13 00:47:23.041390 systemd[1]: Starting systemd-journal-catalog-update.service...
Sep 13 00:47:23.044000 audit: BPF prog-id=36 op=LOAD
Sep 13 00:47:23.046379 systemd[1]: Starting systemd-resolved.service...
Sep 13 00:47:23.046000 audit: BPF prog-id=37 op=LOAD
Sep 13 00:47:23.050733 systemd[1]: Starting systemd-timesyncd.service...
Sep 13 00:47:23.052655 systemd[1]: Starting systemd-update-utmp.service...
Sep 13 00:47:23.064512 systemd[1]: Finished clean-ca-certificates.service.
Sep 13 00:47:23.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:23.065077 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 00:47:23.071000 audit[1132]: SYSTEM_BOOT pid=1132 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:23.075126 systemd[1]: Finished systemd-update-utmp.service.
Sep 13 00:47:23.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:23.084242 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:47:23.084490 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 13 00:47:23.086631 systemd[1]: Starting modprobe@dm_mod.service...
Sep 13 00:47:23.089966 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 13 00:47:23.093459 systemd[1]: Starting modprobe@loop.service...
Sep 13 00:47:23.093994 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 13 00:47:23.094226 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 00:47:23.094406 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 00:47:23.094514 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:47:23.095655 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:47:23.095843 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 00:47:23.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:23.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:23.101056 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:47:23.101430 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 13 00:47:23.105281 systemd[1]: Starting modprobe@dm_mod.service...
Sep 13 00:47:23.105807 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 13 00:47:23.106002 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 00:47:23.106173 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 00:47:23.106274 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:47:23.107960 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:47:23.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:23.107000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:23.108204 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 00:47:23.112792 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:47:23.113014 systemd[1]: Finished modprobe@loop.service.
Sep 13 00:47:23.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:23.115000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:23.116458 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:47:23.116781 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 13 00:47:23.120378 systemd[1]: Starting modprobe@drm.service...
Sep 13 00:47:23.125205 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 13 00:47:23.125756 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 13 00:47:23.125923 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 00:47:23.127810 systemd[1]: Starting systemd-networkd-wait-online.service...
Sep 13 00:47:23.128311 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 00:47:23.128462 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:47:23.129663 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:47:23.129839 systemd[1]: Finished modprobe@dm_mod.service.
Sep 13 00:47:23.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:23.129000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:23.131100 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 13 00:47:23.134084 systemd[1]: Finished ensure-sysext.service.
Sep 13 00:47:23.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:23.142361 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 00:47:23.142552 systemd[1]: Finished modprobe@drm.service.
Sep 13 00:47:23.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:23.142000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:23.151923 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:47:23.152147 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 13 00:47:23.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:23.151000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:23.152777 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:47:23.159917 systemd[1]: Finished systemd-networkd-wait-online.service.
Sep 13 00:47:23.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:23.162115 systemd[1]: Finished systemd-journal-catalog-update.service.
Sep 13 00:47:23.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:23.164073 systemd[1]: Starting systemd-update-done.service...
Sep 13 00:47:23.177243 systemd[1]: Finished systemd-update-done.service.
Sep 13 00:47:23.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 13 00:47:23.192000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Sep 13 00:47:23.192000 audit[1156]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc6da7da20 a2=420 a3=0 items=0 ppid=1127 pid=1156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 13 00:47:23.192000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Sep 13 00:47:23.194305 augenrules[1156]: No rules
Sep 13 00:47:23.194520 systemd[1]: Finished audit-rules.service.
Sep 13 00:47:23.211892 systemd-resolved[1130]: Positive Trust Anchors:
Sep 13 00:47:23.211910 systemd-resolved[1130]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:47:23.211941 systemd-resolved[1130]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 13 00:47:23.214239 systemd[1]: Started systemd-timesyncd.service.
Sep 13 00:47:23.214777 systemd[1]: Reached target time-set.target.
Sep 13 00:47:23.220254 systemd-resolved[1130]: Using system hostname 'ci-3510.3.8-n-17df7d76e4'.
Sep 13 00:47:23.222677 systemd[1]: Started systemd-resolved.service.
Sep 13 00:47:23.223209 systemd[1]: Reached target network.target.
Sep 13 00:47:23.223724 systemd[1]: Reached target network-online.target.
Sep 13 00:47:23.224087 systemd[1]: Reached target nss-lookup.target.
Sep 13 00:47:23.224496 systemd[1]: Reached target sysinit.target.
Sep 13 00:47:23.224914 systemd[1]: Started motdgen.path.
Sep 13 00:47:23.225299 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Sep 13 00:47:23.225990 systemd[1]: Started logrotate.timer.
Sep 13 00:47:23.226367 systemd[1]: Started mdadm.timer.
Sep 13 00:47:23.226647 systemd[1]: Started systemd-tmpfiles-clean.timer.
Sep 13 00:47:23.227022 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 13 00:47:23.227057 systemd[1]: Reached target paths.target.
Sep 13 00:47:23.227365 systemd[1]: Reached target timers.target.
Sep 13 00:47:23.228115 systemd[1]: Listening on dbus.socket.
Sep 13 00:47:23.229705 systemd[1]: Starting docker.socket...
Sep 13 00:47:23.234586 systemd[1]: Listening on sshd.socket.
Sep 13 00:47:23.235143 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 00:47:23.235795 systemd[1]: Listening on docker.socket.
Sep 13 00:47:23.236269 systemd[1]: Reached target sockets.target.
Sep 13 00:47:23.236604 systemd[1]: Reached target basic.target.
Sep 13 00:47:23.236935 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Sep 13 00:47:23.236964 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Sep 13 00:47:23.238338 systemd[1]: Starting containerd.service...
Sep 13 00:47:23.240386 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Sep 13 00:47:23.246216 systemd[1]: Starting dbus.service...
Sep 13 00:47:23.248806 systemd[1]: Starting enable-oem-cloudinit.service...
Sep 13 00:47:23.252410 systemd[1]: Starting extend-filesystems.service...
Sep 13 00:47:23.252948 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Sep 13 00:47:23.258170 systemd[1]: Starting kubelet.service...
Sep 13 00:47:23.260926 systemd[1]: Starting motdgen.service...
Sep 13 00:47:23.265260 systemd[1]: Starting prepare-helm.service...
Sep 13 00:47:23.269287 systemd[1]: Starting ssh-key-proc-cmdline.service...
Sep 13 00:47:23.275306 systemd[1]: Starting sshd-keygen.service...
Sep 13 00:47:23.281735 systemd[1]: Starting systemd-logind.service...
Sep 13 00:47:23.282480 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 13 00:47:23.282607 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 13 00:47:23.283509 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 13 00:47:23.286497 systemd[1]: Starting update-engine.service...
Sep 13 00:47:23.293037 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Sep 13 00:47:23.723094 systemd-timesyncd[1131]: Contacted time server 159.89.45.132:123 (0.flatcar.pool.ntp.org).
Sep 13 00:47:23.723183 systemd-timesyncd[1131]: Initial clock synchronization to Sat 2025-09-13 00:47:23.720996 UTC.
Sep 13 00:47:23.723266 systemd-resolved[1130]: Clock change detected. Flushing caches.
Sep 13 00:47:23.746591 jq[1188]: true
Sep 13 00:47:23.752163 systemd-networkd[1003]: eth0: Gained IPv6LL
Sep 13 00:47:23.753562 jq[1169]: false
Sep 13 00:47:23.761176 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 13 00:47:23.761492 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Sep 13 00:47:23.763057 dbus-daemon[1168]: [system] SELinux support is enabled
Sep 13 00:47:23.763688 systemd[1]: Started dbus.service.
Sep 13 00:47:23.767573 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 13 00:47:23.767645 systemd[1]: Reached target system-config.target.
Sep 13 00:47:23.768222 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 13 00:47:23.768252 systemd[1]: Reached target user-config.target.
Sep 13 00:47:23.779297 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 13 00:47:23.795314 systemd[1]: Finished ssh-key-proc-cmdline.service.
Sep 13 00:47:23.796279 systemd[1]: motdgen.service: Deactivated successfully.
Sep 13 00:47:23.796508 systemd[1]: Finished motdgen.service.
Sep 13 00:47:23.798013 extend-filesystems[1170]: Found loop1
Sep 13 00:47:23.798986 extend-filesystems[1170]: Found vda
Sep 13 00:47:23.799439 extend-filesystems[1170]: Found vda1
Sep 13 00:47:23.799905 extend-filesystems[1170]: Found vda2
Sep 13 00:47:23.801127 tar[1190]: linux-amd64/LICENSE
Sep 13 00:47:23.801424 tar[1190]: linux-amd64/helm
Sep 13 00:47:23.805028 extend-filesystems[1170]: Found vda3
Sep 13 00:47:23.805600 extend-filesystems[1170]: Found usr
Sep 13 00:47:23.806251 extend-filesystems[1170]: Found vda4
Sep 13 00:47:23.806701 extend-filesystems[1170]: Found vda6
Sep 13 00:47:23.807361 extend-filesystems[1170]: Found vda7
Sep 13 00:47:23.807846 extend-filesystems[1170]: Found vda9
Sep 13 00:47:23.808962 extend-filesystems[1170]: Checking size of /dev/vda9
Sep 13 00:47:23.811564 jq[1195]: true
Sep 13 00:47:23.869815 update_engine[1187]: I0913 00:47:23.869149 1187 main.cc:92] Flatcar Update Engine starting
Sep 13 00:47:23.881497 systemd[1]: Started update-engine.service.
Sep 13 00:47:23.881869 update_engine[1187]: I0913 00:47:23.881814 1187 update_check_scheduler.cc:74] Next update check in 6m52s
Sep 13 00:47:23.885308 systemd[1]: Started locksmithd.service.
Sep 13 00:47:23.891335 extend-filesystems[1170]: Resized partition /dev/vda9
Sep 13 00:47:23.912962 extend-filesystems[1217]: resize2fs 1.46.5 (30-Dec-2021)
Sep 13 00:47:23.926919 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Sep 13 00:47:23.976215 bash[1221]: Updated "/home/core/.ssh/authorized_keys"
Sep 13 00:47:23.976960 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Sep 13 00:47:23.978471 env[1193]: time="2025-09-13T00:47:23.978401802Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Sep 13 00:47:24.036952 systemd-logind[1182]: Watching system buttons on /dev/input/event1 (Power Button)
Sep 13 00:47:24.037043 systemd-logind[1182]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 13 00:47:24.037395 systemd-logind[1182]: New seat seat0.
Sep 13 00:47:24.043228 systemd[1]: Started systemd-logind.service.
Sep 13 00:47:24.073688 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Sep 13 00:47:24.088046 coreos-metadata[1165]: Sep 13 00:47:24.085 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Sep 13 00:47:24.095605 extend-filesystems[1217]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 13 00:47:24.095605 extend-filesystems[1217]: old_desc_blocks = 1, new_desc_blocks = 8
Sep 13 00:47:24.095605 extend-filesystems[1217]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Sep 13 00:47:24.102251 extend-filesystems[1170]: Resized filesystem in /dev/vda9
Sep 13 00:47:24.102251 extend-filesystems[1170]: Found vdb
Sep 13 00:47:24.096102 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 13 00:47:24.096311 systemd[1]: Finished extend-filesystems.service.
Sep 13 00:47:24.103780 env[1193]: time="2025-09-13T00:47:24.102436231Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 13 00:47:24.103780 env[1193]: time="2025-09-13T00:47:24.102728536Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:47:24.107350 env[1193]: time="2025-09-13T00:47:24.106095810Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:47:24.107350 env[1193]: time="2025-09-13T00:47:24.106141544Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:47:24.107350 env[1193]: time="2025-09-13T00:47:24.106428090Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:47:24.107350 env[1193]: time="2025-09-13T00:47:24.106447525Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 13 00:47:24.107350 env[1193]: time="2025-09-13T00:47:24.106462717Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Sep 13 00:47:24.107350 env[1193]: time="2025-09-13T00:47:24.106473409Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 13 00:47:24.107350 env[1193]: time="2025-09-13T00:47:24.106555469Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:47:24.107350 env[1193]: time="2025-09-13T00:47:24.106821333Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:47:24.107350 env[1193]: time="2025-09-13T00:47:24.106974078Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:47:24.107350 env[1193]: time="2025-09-13T00:47:24.106989675Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 13 00:47:24.107625 env[1193]: time="2025-09-13T00:47:24.107058439Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Sep 13 00:47:24.107625 env[1193]: time="2025-09-13T00:47:24.107075410Z" level=info msg="metadata content store policy set" policy=shared
Sep 13 00:47:24.110900 coreos-metadata[1165]: Sep 13 00:47:24.109 INFO Fetch successful
Sep 13 00:47:24.128983 env[1193]: time="2025-09-13T00:47:24.128929765Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 13 00:47:24.129193 env[1193]: time="2025-09-13T00:47:24.129173914Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 13 00:47:24.129293 env[1193]: time="2025-09-13T00:47:24.129249490Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 13 00:47:24.129406 env[1193]: time="2025-09-13T00:47:24.129390192Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 13 00:47:24.129480 env[1193]: time="2025-09-13T00:47:24.129464573Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 13 00:47:24.129544 env[1193]: time="2025-09-13T00:47:24.129531217Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 13 00:47:24.129607 env[1193]: time="2025-09-13T00:47:24.129593875Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 13 00:47:24.129678 env[1193]: time="2025-09-13T00:47:24.129664270Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 13 00:47:24.129746 env[1193]: time="2025-09-13T00:47:24.129732516Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Sep 13 00:47:24.129814 env[1193]: time="2025-09-13T00:47:24.129800651Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 13 00:47:24.129909 env[1193]: time="2025-09-13T00:47:24.129893504Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 13 00:47:24.129978 env[1193]: time="2025-09-13T00:47:24.129965026Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 13 00:47:24.130174 env[1193]: time="2025-09-13T00:47:24.130156320Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 13 00:47:24.130345 env[1193]: time="2025-09-13T00:47:24.130330411Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 13 00:47:24.132785 env[1193]: time="2025-09-13T00:47:24.132751572Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 13 00:47:24.133819 env[1193]: time="2025-09-13T00:47:24.133785942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 13 00:47:24.134035 unknown[1165]: wrote ssh authorized keys file for user: core
Sep 13 00:47:24.134390 env[1193]: time="2025-09-13T00:47:24.134367798Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 13 00:47:24.134597 env[1193]: time="2025-09-13T00:47:24.134579480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 13 00:47:24.135594 env[1193]: time="2025-09-13T00:47:24.135565608Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 13 00:47:24.135932 env[1193]: time="2025-09-13T00:47:24.135907714Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 13 00:47:24.136028 env[1193]: time="2025-09-13T00:47:24.136012170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 13 00:47:24.139408 env[1193]: time="2025-09-13T00:47:24.139380590Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 13 00:47:24.142013 env[1193]: time="2025-09-13T00:47:24.141808001Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 13 00:47:24.142013 env[1193]: time="2025-09-13T00:47:24.141895082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 13 00:47:24.142013 env[1193]: time="2025-09-13T00:47:24.141912016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 13 00:47:24.142013 env[1193]: time="2025-09-13T00:47:24.141934618Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 13 00:47:24.142479 env[1193]: time="2025-09-13T00:47:24.142458599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 13 00:47:24.142812 env[1193]: time="2025-09-13T00:47:24.142791595Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 13 00:47:24.142934 env[1193]: time="2025-09-13T00:47:24.142917496Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 13 00:47:24.143001 env[1193]: time="2025-09-13T00:47:24.142988049Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 13 00:47:24.143072 env[1193]: time="2025-09-13T00:47:24.143055490Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Sep 13 00:47:24.143128 env[1193]: time="2025-09-13T00:47:24.143115562Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 13 00:47:24.143202 env[1193]: time="2025-09-13T00:47:24.143188328Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Sep 13 00:47:24.144457 env[1193]: time="2025-09-13T00:47:24.144290768Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 13 00:47:24.145394 env[1193]: time="2025-09-13T00:47:24.145334002Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 13 00:47:24.147233 env[1193]: time="2025-09-13T00:47:24.146810282Z" level=info msg="Connect containerd service"
Sep 13 00:47:24.147233 env[1193]: time="2025-09-13T00:47:24.146897916Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 13 00:47:24.148486 update-ssh-keys[1228]: Updated "/home/core/.ssh/authorized_keys"
Sep 13 00:47:24.149111 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
Sep 13 00:47:24.154369 env[1193]: time="2025-09-13T00:47:24.154327003Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 13 00:47:24.156876 env[1193]: time="2025-09-13T00:47:24.156825604Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 13 00:47:24.162219 env[1193]: time="2025-09-13T00:47:24.162185039Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 13 00:47:24.162399 env[1193]: time="2025-09-13T00:47:24.162384020Z" level=info msg="containerd successfully booted in 0.189586s"
Sep 13 00:47:24.162589 systemd[1]: Started containerd.service.
Sep 13 00:47:24.164357 env[1193]: time="2025-09-13T00:47:24.164274867Z" level=info msg="Start subscribing containerd event" Sep 13 00:47:24.164786 env[1193]: time="2025-09-13T00:47:24.164468686Z" level=info msg="Start recovering state" Sep 13 00:47:24.164786 env[1193]: time="2025-09-13T00:47:24.164608702Z" level=info msg="Start event monitor" Sep 13 00:47:24.164786 env[1193]: time="2025-09-13T00:47:24.164631610Z" level=info msg="Start snapshots syncer" Sep 13 00:47:24.164786 env[1193]: time="2025-09-13T00:47:24.164650324Z" level=info msg="Start cni network conf syncer for default" Sep 13 00:47:24.164786 env[1193]: time="2025-09-13T00:47:24.164767090Z" level=info msg="Start streaming server" Sep 13 00:47:24.838885 locksmithd[1214]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 13 00:47:25.049684 tar[1190]: linux-amd64/README.md Sep 13 00:47:25.058227 systemd[1]: Finished prepare-helm.service. Sep 13 00:47:25.486088 sshd_keygen[1192]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 13 00:47:25.489905 systemd[1]: Started kubelet.service. Sep 13 00:47:25.519534 systemd[1]: Finished sshd-keygen.service. Sep 13 00:47:25.522368 systemd[1]: Starting issuegen.service... Sep 13 00:47:25.534924 systemd[1]: issuegen.service: Deactivated successfully. Sep 13 00:47:25.535189 systemd[1]: Finished issuegen.service. Sep 13 00:47:25.538193 systemd[1]: Starting systemd-user-sessions.service... Sep 13 00:47:25.551624 systemd[1]: Finished systemd-user-sessions.service. Sep 13 00:47:25.554492 systemd[1]: Started getty@tty1.service. Sep 13 00:47:25.557622 systemd[1]: Started serial-getty@ttyS0.service. Sep 13 00:47:25.558653 systemd[1]: Reached target getty.target. Sep 13 00:47:25.559115 systemd[1]: Reached target multi-user.target. Sep 13 00:47:25.562134 systemd[1]: Starting systemd-update-utmp-runlevel.service... Sep 13 00:47:25.575901 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. 
Sep 13 00:47:25.576131 systemd[1]: Finished systemd-update-utmp-runlevel.service. Sep 13 00:47:25.576893 systemd[1]: Startup finished in 905ms (kernel) + 5.624s (initrd) + 7.720s (userspace) = 14.250s. Sep 13 00:47:25.973337 systemd[1]: Created slice system-sshd.slice. Sep 13 00:47:25.975738 systemd[1]: Started sshd@0-146.190.148.102:22-147.75.109.163:57230.service. Sep 13 00:47:26.045105 sshd[1257]: Accepted publickey for core from 147.75.109.163 port 57230 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:47:26.047733 sshd[1257]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:47:26.062141 systemd[1]: Created slice user-500.slice. Sep 13 00:47:26.066568 systemd[1]: Starting user-runtime-dir@500.service... Sep 13 00:47:26.077245 systemd-logind[1182]: New session 1 of user core. Sep 13 00:47:26.085089 systemd[1]: Finished user-runtime-dir@500.service. Sep 13 00:47:26.087617 systemd[1]: Starting user@500.service... Sep 13 00:47:26.094688 (systemd)[1261]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:47:26.194355 systemd[1261]: Queued start job for default target default.target. Sep 13 00:47:26.195265 systemd[1261]: Reached target paths.target. Sep 13 00:47:26.195299 systemd[1261]: Reached target sockets.target. Sep 13 00:47:26.195320 systemd[1261]: Reached target timers.target. Sep 13 00:47:26.195340 systemd[1261]: Reached target basic.target. Sep 13 00:47:26.195428 systemd[1261]: Reached target default.target. Sep 13 00:47:26.195504 systemd[1261]: Startup finished in 88ms. Sep 13 00:47:26.195524 systemd[1]: Started user@500.service. Sep 13 00:47:26.196795 systemd[1]: Started session-1.scope. 
Sep 13 00:47:26.221407 kubelet[1239]: E0913 00:47:26.221349 1239 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:47:26.224035 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:47:26.224180 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:47:26.224517 systemd[1]: kubelet.service: Consumed 1.288s CPU time. Sep 13 00:47:26.261761 systemd[1]: Started sshd@1-146.190.148.102:22-147.75.109.163:57240.service. Sep 13 00:47:26.312204 sshd[1270]: Accepted publickey for core from 147.75.109.163 port 57240 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:47:26.314006 sshd[1270]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:47:26.320084 systemd[1]: Started session-2.scope. Sep 13 00:47:26.320727 systemd-logind[1182]: New session 2 of user core. Sep 13 00:47:26.391172 sshd[1270]: pam_unix(sshd:session): session closed for user core Sep 13 00:47:26.396761 systemd[1]: sshd@1-146.190.148.102:22-147.75.109.163:57240.service: Deactivated successfully. Sep 13 00:47:26.397600 systemd[1]: session-2.scope: Deactivated successfully. Sep 13 00:47:26.399021 systemd-logind[1182]: Session 2 logged out. Waiting for processes to exit. Sep 13 00:47:26.400755 systemd[1]: Started sshd@2-146.190.148.102:22-147.75.109.163:57248.service. Sep 13 00:47:26.402939 systemd-logind[1182]: Removed session 2. 
Sep 13 00:47:26.450825 sshd[1276]: Accepted publickey for core from 147.75.109.163 port 57248 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:47:26.453558 sshd[1276]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:47:26.459600 systemd-logind[1182]: New session 3 of user core. Sep 13 00:47:26.459741 systemd[1]: Started session-3.scope. Sep 13 00:47:26.522013 sshd[1276]: pam_unix(sshd:session): session closed for user core Sep 13 00:47:26.528022 systemd[1]: sshd@2-146.190.148.102:22-147.75.109.163:57248.service: Deactivated successfully. Sep 13 00:47:26.529139 systemd[1]: session-3.scope: Deactivated successfully. Sep 13 00:47:26.530053 systemd-logind[1182]: Session 3 logged out. Waiting for processes to exit. Sep 13 00:47:26.532008 systemd[1]: Started sshd@3-146.190.148.102:22-147.75.109.163:57260.service. Sep 13 00:47:26.534069 systemd-logind[1182]: Removed session 3. Sep 13 00:47:26.587150 sshd[1282]: Accepted publickey for core from 147.75.109.163 port 57260 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:47:26.590103 sshd[1282]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:47:26.596946 systemd-logind[1182]: New session 4 of user core. Sep 13 00:47:26.598520 systemd[1]: Started session-4.scope. Sep 13 00:47:26.664717 sshd[1282]: pam_unix(sshd:session): session closed for user core Sep 13 00:47:26.674412 systemd[1]: Started sshd@4-146.190.148.102:22-147.75.109.163:57274.service. Sep 13 00:47:26.675164 systemd[1]: sshd@3-146.190.148.102:22-147.75.109.163:57260.service: Deactivated successfully. Sep 13 00:47:26.676459 systemd[1]: session-4.scope: Deactivated successfully. Sep 13 00:47:26.677335 systemd-logind[1182]: Session 4 logged out. Waiting for processes to exit. Sep 13 00:47:26.678905 systemd-logind[1182]: Removed session 4. 
Sep 13 00:47:26.726226 sshd[1287]: Accepted publickey for core from 147.75.109.163 port 57274 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:47:26.728405 sshd[1287]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:47:26.732650 systemd-logind[1182]: New session 5 of user core. Sep 13 00:47:26.733630 systemd[1]: Started session-5.scope. Sep 13 00:47:26.805510 sudo[1291]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 13 00:47:26.806584 sudo[1291]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 13 00:47:26.845943 systemd[1]: Starting docker.service... Sep 13 00:47:26.903354 env[1301]: time="2025-09-13T00:47:26.903282200Z" level=info msg="Starting up" Sep 13 00:47:26.906189 env[1301]: time="2025-09-13T00:47:26.906145815Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 13 00:47:26.906189 env[1301]: time="2025-09-13T00:47:26.906171915Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 13 00:47:26.906189 env[1301]: time="2025-09-13T00:47:26.906192937Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 13 00:47:26.906396 env[1301]: time="2025-09-13T00:47:26.906205038Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 13 00:47:26.908662 env[1301]: time="2025-09-13T00:47:26.908620135Z" level=info msg="parsed scheme: \"unix\"" module=grpc Sep 13 00:47:26.908662 env[1301]: time="2025-09-13T00:47:26.908644261Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Sep 13 00:47:26.908662 env[1301]: time="2025-09-13T00:47:26.908660122Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Sep 13 00:47:26.908662 env[1301]: time="2025-09-13T00:47:26.908669842Z" level=info 
msg="ClientConn switching balancer to \"pick_first\"" module=grpc Sep 13 00:47:26.963570 env[1301]: time="2025-09-13T00:47:26.962699502Z" level=info msg="Loading containers: start." Sep 13 00:47:27.133938 kernel: Initializing XFRM netlink socket Sep 13 00:47:27.173693 env[1301]: time="2025-09-13T00:47:27.173635349Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Sep 13 00:47:27.257260 systemd-networkd[1003]: docker0: Link UP Sep 13 00:47:27.271769 env[1301]: time="2025-09-13T00:47:27.271727489Z" level=info msg="Loading containers: done." Sep 13 00:47:27.285219 env[1301]: time="2025-09-13T00:47:27.285165055Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 13 00:47:27.285413 env[1301]: time="2025-09-13T00:47:27.285394549Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Sep 13 00:47:27.285545 env[1301]: time="2025-09-13T00:47:27.285522487Z" level=info msg="Daemon has completed initialization" Sep 13 00:47:27.303016 systemd[1]: Started docker.service. Sep 13 00:47:27.308902 env[1301]: time="2025-09-13T00:47:27.308827746Z" level=info msg="API listen on /run/docker.sock" Sep 13 00:47:27.346000 systemd[1]: Starting coreos-metadata.service... Sep 13 00:47:27.394602 coreos-metadata[1418]: Sep 13 00:47:27.394 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Sep 13 00:47:27.407106 coreos-metadata[1418]: Sep 13 00:47:27.406 INFO Fetch successful Sep 13 00:47:27.419676 systemd[1]: Finished coreos-metadata.service. 
Sep 13 00:47:28.347294 env[1193]: time="2025-09-13T00:47:28.347247936Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Sep 13 00:47:28.948777 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2426074112.mount: Deactivated successfully. Sep 13 00:47:30.501139 env[1193]: time="2025-09-13T00:47:30.501074169Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:47:30.502838 env[1193]: time="2025-09-13T00:47:30.502791840Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:47:30.504911 env[1193]: time="2025-09-13T00:47:30.504880656Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:47:30.506998 env[1193]: time="2025-09-13T00:47:30.506961748Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:47:30.507943 env[1193]: time="2025-09-13T00:47:30.507907787Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Sep 13 00:47:30.508693 env[1193]: time="2025-09-13T00:47:30.508668829Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Sep 13 00:47:32.229154 env[1193]: time="2025-09-13T00:47:32.229060842Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" 
Sep 13 00:47:32.231923 env[1193]: time="2025-09-13T00:47:32.231804616Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:47:32.234611 env[1193]: time="2025-09-13T00:47:32.234565527Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:47:32.240210 env[1193]: time="2025-09-13T00:47:32.240131654Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Sep 13 00:47:32.240942 env[1193]: time="2025-09-13T00:47:32.240900373Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Sep 13 00:47:32.241165 env[1193]: time="2025-09-13T00:47:32.241135773Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:47:33.753057 env[1193]: time="2025-09-13T00:47:33.752995408Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:47:33.754636 env[1193]: time="2025-09-13T00:47:33.754599427Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:47:33.756851 env[1193]: time="2025-09-13T00:47:33.756808893Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.9,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Sep 13 00:47:33.759702 env[1193]: time="2025-09-13T00:47:33.759647824Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:47:33.760522 env[1193]: time="2025-09-13T00:47:33.760485867Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Sep 13 00:47:33.761427 env[1193]: time="2025-09-13T00:47:33.761388947Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Sep 13 00:47:35.190421 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1797462.mount: Deactivated successfully. Sep 13 00:47:36.346516 env[1193]: time="2025-09-13T00:47:36.346454987Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:47:36.349187 env[1193]: time="2025-09-13T00:47:36.349129611Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:47:36.351635 env[1193]: time="2025-09-13T00:47:36.351578479Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:47:36.353816 env[1193]: time="2025-09-13T00:47:36.353760557Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:47:36.355496 env[1193]: time="2025-09-13T00:47:36.354663509Z" level=info 
msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Sep 13 00:47:36.356460 env[1193]: time="2025-09-13T00:47:36.356418939Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 13 00:47:36.475767 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 13 00:47:36.476087 systemd[1]: Stopped kubelet.service. Sep 13 00:47:36.476156 systemd[1]: kubelet.service: Consumed 1.288s CPU time. Sep 13 00:47:36.478370 systemd[1]: Starting kubelet.service... Sep 13 00:47:36.638911 systemd[1]: Started kubelet.service. Sep 13 00:47:36.701644 kubelet[1441]: E0913 00:47:36.701567 1441 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:47:36.705173 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:47:36.705356 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:47:36.922287 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2184140426.mount: Deactivated successfully. 
Sep 13 00:47:37.906538 env[1193]: time="2025-09-13T00:47:37.906477554Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:47:37.908053 env[1193]: time="2025-09-13T00:47:37.908012898Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:47:37.909632 env[1193]: time="2025-09-13T00:47:37.909596104Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:47:37.911423 env[1193]: time="2025-09-13T00:47:37.911391526Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:47:37.912411 env[1193]: time="2025-09-13T00:47:37.912371198Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 13 00:47:37.913014 env[1193]: time="2025-09-13T00:47:37.912983745Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 13 00:47:38.454719 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1241003678.mount: Deactivated successfully. 
Sep 13 00:47:38.458721 env[1193]: time="2025-09-13T00:47:38.458661060Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:47:38.460717 env[1193]: time="2025-09-13T00:47:38.460668353Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:47:38.461980 env[1193]: time="2025-09-13T00:47:38.461949367Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:47:38.463925 env[1193]: time="2025-09-13T00:47:38.463889303Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:47:38.464891 env[1193]: time="2025-09-13T00:47:38.464840495Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 13 00:47:38.465350 env[1193]: time="2025-09-13T00:47:38.465322094Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 13 00:47:38.984386 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2236005199.mount: Deactivated successfully. 
Sep 13 00:47:41.297096 env[1193]: time="2025-09-13T00:47:41.297017393Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:47:41.298978 env[1193]: time="2025-09-13T00:47:41.298936517Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:47:41.300899 env[1193]: time="2025-09-13T00:47:41.300807709Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:47:41.302890 env[1193]: time="2025-09-13T00:47:41.302828230Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:47:41.303842 env[1193]: time="2025-09-13T00:47:41.303806045Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Sep 13 00:47:44.585479 systemd[1]: Stopped kubelet.service. Sep 13 00:47:44.589267 systemd[1]: Starting kubelet.service... Sep 13 00:47:44.637589 systemd[1]: Reloading. 
Sep 13 00:47:44.746371 /usr/lib/systemd/system-generators/torcx-generator[1489]: time="2025-09-13T00:47:44Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 00:47:44.746455 /usr/lib/systemd/system-generators/torcx-generator[1489]: time="2025-09-13T00:47:44Z" level=info msg="torcx already run" Sep 13 00:47:44.884662 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 13 00:47:44.884681 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 00:47:44.905214 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:47:45.021804 systemd[1]: Started kubelet.service. Sep 13 00:47:45.027460 systemd[1]: Stopping kubelet.service... Sep 13 00:47:45.028743 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:47:45.029256 systemd[1]: Stopped kubelet.service. Sep 13 00:47:45.033180 systemd[1]: Starting kubelet.service... Sep 13 00:47:45.152546 systemd[1]: Started kubelet.service. Sep 13 00:47:45.239171 kubelet[1549]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:47:45.239171 kubelet[1549]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Sep 13 00:47:45.239171 kubelet[1549]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:47:45.239823 kubelet[1549]: I0913 00:47:45.239266 1549 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:47:45.837302 kubelet[1549]: I0913 00:47:45.837238 1549 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 13 00:47:45.837614 kubelet[1549]: I0913 00:47:45.837594 1549 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:47:45.838556 kubelet[1549]: I0913 00:47:45.838526 1549 server.go:954] "Client rotation is on, will bootstrap in background" Sep 13 00:47:45.871887 kubelet[1549]: E0913 00:47:45.871815 1549 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://146.190.148.102:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 146.190.148.102:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:47:45.874217 kubelet[1549]: I0913 00:47:45.874180 1549 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:47:45.887881 kubelet[1549]: E0913 00:47:45.887813 1549 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:47:45.888173 kubelet[1549]: I0913 00:47:45.888149 1549 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Sep 13 00:47:45.893705 kubelet[1549]: I0913 00:47:45.893659 1549 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 13 00:47:45.894444 kubelet[1549]: I0913 00:47:45.894405 1549 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:47:45.894921 kubelet[1549]: I0913 00:47:45.894573 1549 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-n-17df7d76e4","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":
null,"CgroupVersion":2} Sep 13 00:47:45.895236 kubelet[1549]: I0913 00:47:45.895216 1549 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:47:45.895345 kubelet[1549]: I0913 00:47:45.895330 1549 container_manager_linux.go:304] "Creating device plugin manager" Sep 13 00:47:45.895634 kubelet[1549]: I0913 00:47:45.895617 1549 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:47:45.901168 kubelet[1549]: I0913 00:47:45.901125 1549 kubelet.go:446] "Attempting to sync node with API server" Sep 13 00:47:45.901528 kubelet[1549]: I0913 00:47:45.901506 1549 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:47:45.902736 kubelet[1549]: I0913 00:47:45.902712 1549 kubelet.go:352] "Adding apiserver pod source" Sep 13 00:47:45.902915 kubelet[1549]: I0913 00:47:45.902897 1549 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:47:45.905048 kubelet[1549]: W0913 00:47:45.904986 1549 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://146.190.148.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-17df7d76e4&limit=500&resourceVersion=0": dial tcp 146.190.148.102:6443: connect: connection refused Sep 13 00:47:45.905230 kubelet[1549]: E0913 00:47:45.905201 1549 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://146.190.148.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-17df7d76e4&limit=500&resourceVersion=0\": dial tcp 146.190.148.102:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:47:45.908923 kubelet[1549]: I0913 00:47:45.908898 1549 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 13 00:47:45.909383 kubelet[1549]: I0913 00:47:45.909361 1549 kubelet.go:890] "Not starting ClusterTrustBundle informer because 
we are in static kubelet mode" Sep 13 00:47:45.910014 kubelet[1549]: W0913 00:47:45.909988 1549 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 13 00:47:45.915560 kubelet[1549]: I0913 00:47:45.915518 1549 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 13 00:47:45.915704 kubelet[1549]: I0913 00:47:45.915571 1549 server.go:1287] "Started kubelet" Sep 13 00:47:45.916827 kubelet[1549]: W0913 00:47:45.915736 1549 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://146.190.148.102:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 146.190.148.102:6443: connect: connection refused Sep 13 00:47:45.916827 kubelet[1549]: E0913 00:47:45.915786 1549 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://146.190.148.102:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 146.190.148.102:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:47:45.931124 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Sep 13 00:47:45.931327 kubelet[1549]: I0913 00:47:45.931267 1549 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:47:45.938077 kubelet[1549]: E0913 00:47:45.936266 1549 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://146.190.148.102:6443/api/v1/namespaces/default/events\": dial tcp 146.190.148.102:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.8-n-17df7d76e4.1864b116c3e8cdd0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.8-n-17df7d76e4,UID:ci-3510.3.8-n-17df7d76e4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.8-n-17df7d76e4,},FirstTimestamp:2025-09-13 00:47:45.915547088 +0000 UTC m=+0.755874579,LastTimestamp:2025-09-13 00:47:45.915547088 +0000 UTC m=+0.755874579,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.8-n-17df7d76e4,}" Sep 13 00:47:45.939180 kubelet[1549]: I0913 00:47:45.939111 1549 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:47:45.940564 kubelet[1549]: I0913 00:47:45.940324 1549 server.go:479] "Adding debug handlers to kubelet server" Sep 13 00:47:45.942327 kubelet[1549]: I0913 00:47:45.942242 1549 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:47:45.942621 kubelet[1549]: I0913 00:47:45.942599 1549 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:47:45.943157 kubelet[1549]: I0913 00:47:45.943131 1549 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:47:45.943430 kubelet[1549]: E0913 00:47:45.943404 1549 kubelet_node_status.go:466] 
"Error getting the current node from lister" err="node \"ci-3510.3.8-n-17df7d76e4\" not found" Sep 13 00:47:45.943430 kubelet[1549]: I0913 00:47:45.943235 1549 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 13 00:47:45.945779 kubelet[1549]: I0913 00:47:45.943206 1549 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 13 00:47:45.946381 kubelet[1549]: I0913 00:47:45.946358 1549 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:47:45.946823 kubelet[1549]: E0913 00:47:45.946763 1549 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.148.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-17df7d76e4?timeout=10s\": dial tcp 146.190.148.102:6443: connect: connection refused" interval="200ms" Sep 13 00:47:45.947439 kubelet[1549]: I0913 00:47:45.947417 1549 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:47:45.947708 kubelet[1549]: I0913 00:47:45.947683 1549 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:47:45.949278 kubelet[1549]: E0913 00:47:45.949252 1549 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:47:45.949637 kubelet[1549]: I0913 00:47:45.949619 1549 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:47:45.968129 kubelet[1549]: I0913 00:47:45.968064 1549 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 00:47:45.969354 kubelet[1549]: I0913 00:47:45.969322 1549 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 13 00:47:45.969496 kubelet[1549]: I0913 00:47:45.969366 1549 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 13 00:47:45.969496 kubelet[1549]: I0913 00:47:45.969398 1549 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 13 00:47:45.969496 kubelet[1549]: I0913 00:47:45.969408 1549 kubelet.go:2382] "Starting kubelet main sync loop" Sep 13 00:47:45.969496 kubelet[1549]: E0913 00:47:45.969469 1549 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:47:45.971726 kubelet[1549]: W0913 00:47:45.971677 1549 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://146.190.148.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.148.102:6443: connect: connection refused Sep 13 00:47:45.971905 kubelet[1549]: E0913 00:47:45.971734 1549 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://146.190.148.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 146.190.148.102:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:47:45.974918 kubelet[1549]: W0913 00:47:45.974865 1549 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://146.190.148.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.148.102:6443: connect: connection refused Sep 13 00:47:45.975065 kubelet[1549]: E0913 00:47:45.974924 1549 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://146.190.148.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 146.190.148.102:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:47:45.979697 kubelet[1549]: I0913 00:47:45.979671 1549 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 13 00:47:45.979697 kubelet[1549]: I0913 00:47:45.979686 1549 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 13 00:47:45.979697 kubelet[1549]: I0913 00:47:45.979705 1549 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:47:45.981608 kubelet[1549]: I0913 00:47:45.981575 1549 policy_none.go:49] "None policy: Start" Sep 13 00:47:45.981608 kubelet[1549]: I0913 00:47:45.981605 1549 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 13 00:47:45.981774 kubelet[1549]: I0913 00:47:45.981622 1549 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:47:45.989833 systemd[1]: Created slice kubepods.slice. Sep 13 00:47:45.998375 systemd[1]: Created slice kubepods-besteffort.slice. Sep 13 00:47:46.008386 systemd[1]: Created slice kubepods-burstable.slice. Sep 13 00:47:46.010327 kubelet[1549]: I0913 00:47:46.010291 1549 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:47:46.011846 kubelet[1549]: I0913 00:47:46.011143 1549 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:47:46.011846 kubelet[1549]: I0913 00:47:46.011160 1549 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:47:46.011846 kubelet[1549]: I0913 00:47:46.011540 1549 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:47:46.012621 kubelet[1549]: E0913 00:47:46.012592 1549 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 13 00:47:46.012846 kubelet[1549]: E0913 00:47:46.012826 1549 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.8-n-17df7d76e4\" not found" Sep 13 00:47:46.082511 systemd[1]: Created slice kubepods-burstable-podc13fcec80446876b10f61b39445a1262.slice. Sep 13 00:47:46.093958 kubelet[1549]: E0913 00:47:46.090814 1549 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-17df7d76e4\" not found" node="ci-3510.3.8-n-17df7d76e4" Sep 13 00:47:46.094981 systemd[1]: Created slice kubepods-burstable-pod87c329bc66e8731f53abd330e3cdfad4.slice. Sep 13 00:47:46.102411 kubelet[1549]: E0913 00:47:46.102357 1549 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-17df7d76e4\" not found" node="ci-3510.3.8-n-17df7d76e4" Sep 13 00:47:46.105125 systemd[1]: Created slice kubepods-burstable-pod3f81f27e5a7d8617e51ef26f289f079b.slice. 
Sep 13 00:47:46.107542 kubelet[1549]: E0913 00:47:46.107509 1549 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-17df7d76e4\" not found" node="ci-3510.3.8-n-17df7d76e4" Sep 13 00:47:46.113125 kubelet[1549]: I0913 00:47:46.113082 1549 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-17df7d76e4" Sep 13 00:47:46.113495 kubelet[1549]: E0913 00:47:46.113469 1549 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://146.190.148.102:6443/api/v1/nodes\": dial tcp 146.190.148.102:6443: connect: connection refused" node="ci-3510.3.8-n-17df7d76e4" Sep 13 00:47:46.148114 kubelet[1549]: I0913 00:47:46.148045 1549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c13fcec80446876b10f61b39445a1262-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-17df7d76e4\" (UID: \"c13fcec80446876b10f61b39445a1262\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-17df7d76e4" Sep 13 00:47:46.148114 kubelet[1549]: I0913 00:47:46.148091 1549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f81f27e5a7d8617e51ef26f289f079b-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-17df7d76e4\" (UID: \"3f81f27e5a7d8617e51ef26f289f079b\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-17df7d76e4" Sep 13 00:47:46.148114 kubelet[1549]: I0913 00:47:46.148114 1549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f81f27e5a7d8617e51ef26f289f079b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-17df7d76e4\" (UID: \"3f81f27e5a7d8617e51ef26f289f079b\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-17df7d76e4" 
Sep 13 00:47:46.150423 kubelet[1549]: I0913 00:47:46.150375 1549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c13fcec80446876b10f61b39445a1262-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-17df7d76e4\" (UID: \"c13fcec80446876b10f61b39445a1262\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-17df7d76e4" Sep 13 00:47:46.150423 kubelet[1549]: I0913 00:47:46.150430 1549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c13fcec80446876b10f61b39445a1262-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-17df7d76e4\" (UID: \"c13fcec80446876b10f61b39445a1262\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-17df7d76e4" Sep 13 00:47:46.150626 kubelet[1549]: I0913 00:47:46.150448 1549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c13fcec80446876b10f61b39445a1262-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-17df7d76e4\" (UID: \"c13fcec80446876b10f61b39445a1262\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-17df7d76e4" Sep 13 00:47:46.150626 kubelet[1549]: I0913 00:47:46.150467 1549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/87c329bc66e8731f53abd330e3cdfad4-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-17df7d76e4\" (UID: \"87c329bc66e8731f53abd330e3cdfad4\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-17df7d76e4" Sep 13 00:47:46.150626 kubelet[1549]: I0913 00:47:46.150482 1549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f81f27e5a7d8617e51ef26f289f079b-ca-certs\") pod 
\"kube-apiserver-ci-3510.3.8-n-17df7d76e4\" (UID: \"3f81f27e5a7d8617e51ef26f289f079b\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-17df7d76e4" Sep 13 00:47:46.150626 kubelet[1549]: I0913 00:47:46.150497 1549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c13fcec80446876b10f61b39445a1262-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-17df7d76e4\" (UID: \"c13fcec80446876b10f61b39445a1262\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-17df7d76e4" Sep 13 00:47:46.151187 kubelet[1549]: E0913 00:47:46.151147 1549 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.148.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-17df7d76e4?timeout=10s\": dial tcp 146.190.148.102:6443: connect: connection refused" interval="400ms" Sep 13 00:47:46.315966 kubelet[1549]: I0913 00:47:46.315915 1549 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-17df7d76e4" Sep 13 00:47:46.316815 kubelet[1549]: E0913 00:47:46.316763 1549 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://146.190.148.102:6443/api/v1/nodes\": dial tcp 146.190.148.102:6443: connect: connection refused" node="ci-3510.3.8-n-17df7d76e4" Sep 13 00:47:46.394272 kubelet[1549]: E0913 00:47:46.393109 1549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:47:46.396043 env[1193]: time="2025-09-13T00:47:46.395969218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-17df7d76e4,Uid:c13fcec80446876b10f61b39445a1262,Namespace:kube-system,Attempt:0,}" Sep 13 00:47:46.404424 kubelet[1549]: E0913 00:47:46.404356 1549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:47:46.405618 env[1193]: time="2025-09-13T00:47:46.405190581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-17df7d76e4,Uid:87c329bc66e8731f53abd330e3cdfad4,Namespace:kube-system,Attempt:0,}" Sep 13 00:47:46.408436 kubelet[1549]: E0913 00:47:46.408397 1549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:47:46.409339 env[1193]: time="2025-09-13T00:47:46.409303785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-17df7d76e4,Uid:3f81f27e5a7d8617e51ef26f289f079b,Namespace:kube-system,Attempt:0,}" Sep 13 00:47:46.552504 kubelet[1549]: E0913 00:47:46.552430 1549 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.148.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-17df7d76e4?timeout=10s\": dial tcp 146.190.148.102:6443: connect: connection refused" interval="800ms" Sep 13 00:47:46.614253 kubelet[1549]: E0913 00:47:46.614105 1549 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://146.190.148.102:6443/api/v1/namespaces/default/events\": dial tcp 146.190.148.102:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.8-n-17df7d76e4.1864b116c3e8cdd0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.8-n-17df7d76e4,UID:ci-3510.3.8-n-17df7d76e4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.8-n-17df7d76e4,},FirstTimestamp:2025-09-13 00:47:45.915547088 +0000 UTC m=+0.755874579,LastTimestamp:2025-09-13 00:47:45.915547088 +0000 UTC 
m=+0.755874579,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.8-n-17df7d76e4,}" Sep 13 00:47:46.719074 kubelet[1549]: I0913 00:47:46.718472 1549 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-17df7d76e4" Sep 13 00:47:46.719302 kubelet[1549]: E0913 00:47:46.719267 1549 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://146.190.148.102:6443/api/v1/nodes\": dial tcp 146.190.148.102:6443: connect: connection refused" node="ci-3510.3.8-n-17df7d76e4" Sep 13 00:47:46.807361 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2010106232.mount: Deactivated successfully. Sep 13 00:47:46.812757 env[1193]: time="2025-09-13T00:47:46.812703447Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:47:46.814690 env[1193]: time="2025-09-13T00:47:46.814623060Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:47:46.815590 env[1193]: time="2025-09-13T00:47:46.815559190Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:47:46.816358 env[1193]: time="2025-09-13T00:47:46.816321242Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:47:46.817419 env[1193]: time="2025-09-13T00:47:46.817384382Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:47:46.819050 env[1193]: 
time="2025-09-13T00:47:46.819019343Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:47:46.822371 env[1193]: time="2025-09-13T00:47:46.822321451Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:47:46.825768 env[1193]: time="2025-09-13T00:47:46.825721691Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:47:46.827388 env[1193]: time="2025-09-13T00:47:46.827339348Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:47:46.828693 env[1193]: time="2025-09-13T00:47:46.828659509Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:47:46.829512 env[1193]: time="2025-09-13T00:47:46.829483635Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:47:46.830280 env[1193]: time="2025-09-13T00:47:46.830253742Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 13 00:47:46.847322 kubelet[1549]: W0913 00:47:46.847258 1549 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://146.190.148.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-17df7d76e4&limit=500&resourceVersion=0": dial tcp 146.190.148.102:6443: connect: connection refused Sep 13 00:47:46.847322 kubelet[1549]: E0913 00:47:46.847325 1549 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://146.190.148.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-17df7d76e4&limit=500&resourceVersion=0\": dial tcp 146.190.148.102:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:47:46.860577 env[1193]: time="2025-09-13T00:47:46.860497596Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:47:46.860726 env[1193]: time="2025-09-13T00:47:46.860546893Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:47:46.860726 env[1193]: time="2025-09-13T00:47:46.860557422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:47:46.860726 env[1193]: time="2025-09-13T00:47:46.860699875Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/17b555188dddcc41ec4c259f521fe5d6da44f844f2e026a508c171f93066e2d3 pid=1590 runtime=io.containerd.runc.v2 Sep 13 00:47:46.871670 env[1193]: time="2025-09-13T00:47:46.871576657Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:47:46.871948 env[1193]: time="2025-09-13T00:47:46.871921480Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:47:46.872060 env[1193]: time="2025-09-13T00:47:46.872038663Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:47:46.873516 env[1193]: time="2025-09-13T00:47:46.873417998Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fffe27f3536fd38eaa3264d35d61422884f3d05154d5c65d75c10cc4f71cb7f6 pid=1608 runtime=io.containerd.runc.v2 Sep 13 00:47:46.875098 env[1193]: time="2025-09-13T00:47:46.874987965Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:47:46.875367 env[1193]: time="2025-09-13T00:47:46.875335829Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:47:46.875498 env[1193]: time="2025-09-13T00:47:46.875471879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:47:46.875806 env[1193]: time="2025-09-13T00:47:46.875767217Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6e77c11a43b6155cf40b83206695dba51723483c660a1fc5ed6526d8e07f7713 pid=1611 runtime=io.containerd.runc.v2 Sep 13 00:47:46.896943 systemd[1]: Started cri-containerd-17b555188dddcc41ec4c259f521fe5d6da44f844f2e026a508c171f93066e2d3.scope. Sep 13 00:47:46.903040 systemd[1]: Started cri-containerd-6e77c11a43b6155cf40b83206695dba51723483c660a1fc5ed6526d8e07f7713.scope. Sep 13 00:47:46.937881 systemd[1]: Started cri-containerd-fffe27f3536fd38eaa3264d35d61422884f3d05154d5c65d75c10cc4f71cb7f6.scope. 
Sep 13 00:47:47.016928 env[1193]: time="2025-09-13T00:47:47.016116358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-17df7d76e4,Uid:c13fcec80446876b10f61b39445a1262,Namespace:kube-system,Attempt:0,} returns sandbox id \"6e77c11a43b6155cf40b83206695dba51723483c660a1fc5ed6526d8e07f7713\"" Sep 13 00:47:47.023621 kubelet[1549]: E0913 00:47:47.023568 1549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:47:47.027218 env[1193]: time="2025-09-13T00:47:47.027174902Z" level=info msg="CreateContainer within sandbox \"6e77c11a43b6155cf40b83206695dba51723483c660a1fc5ed6526d8e07f7713\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 13 00:47:47.029150 env[1193]: time="2025-09-13T00:47:47.029098261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-17df7d76e4,Uid:3f81f27e5a7d8617e51ef26f289f079b,Namespace:kube-system,Attempt:0,} returns sandbox id \"17b555188dddcc41ec4c259f521fe5d6da44f844f2e026a508c171f93066e2d3\"" Sep 13 00:47:47.046163 kubelet[1549]: E0913 00:47:47.045976 1549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:47:47.048612 env[1193]: time="2025-09-13T00:47:47.048572272Z" level=info msg="CreateContainer within sandbox \"17b555188dddcc41ec4c259f521fe5d6da44f844f2e026a508c171f93066e2d3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 13 00:47:47.049352 env[1193]: time="2025-09-13T00:47:47.048936632Z" level=info msg="CreateContainer within sandbox \"6e77c11a43b6155cf40b83206695dba51723483c660a1fc5ed6526d8e07f7713\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"1407bfc43a410dd1a0ba5536772ec91c7f52930eafeecc1942cb93fdd214ae8b\"" Sep 13 00:47:47.050881 env[1193]: time="2025-09-13T00:47:47.050835015Z" level=info msg="StartContainer for \"1407bfc43a410dd1a0ba5536772ec91c7f52930eafeecc1942cb93fdd214ae8b\"" Sep 13 00:47:47.062102 kubelet[1549]: W0913 00:47:47.062028 1549 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://146.190.148.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.148.102:6443: connect: connection refused Sep 13 00:47:47.062258 kubelet[1549]: E0913 00:47:47.062109 1549 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://146.190.148.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 146.190.148.102:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:47:47.065172 env[1193]: time="2025-09-13T00:47:47.065119381Z" level=info msg="CreateContainer within sandbox \"17b555188dddcc41ec4c259f521fe5d6da44f844f2e026a508c171f93066e2d3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6a3cd563c6f86f62d6845ee1413538305315dff314b0292e73b7c6166cab0606\"" Sep 13 00:47:47.065792 env[1193]: time="2025-09-13T00:47:47.065758563Z" level=info msg="StartContainer for \"6a3cd563c6f86f62d6845ee1413538305315dff314b0292e73b7c6166cab0606\"" Sep 13 00:47:47.074279 systemd[1]: Started cri-containerd-1407bfc43a410dd1a0ba5536772ec91c7f52930eafeecc1942cb93fdd214ae8b.scope. 
Sep 13 00:47:47.083617 env[1193]: time="2025-09-13T00:47:47.083569706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-17df7d76e4,Uid:87c329bc66e8731f53abd330e3cdfad4,Namespace:kube-system,Attempt:0,} returns sandbox id \"fffe27f3536fd38eaa3264d35d61422884f3d05154d5c65d75c10cc4f71cb7f6\"" Sep 13 00:47:47.086758 kubelet[1549]: E0913 00:47:47.085135 1549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:47:47.087404 env[1193]: time="2025-09-13T00:47:47.087348361Z" level=info msg="CreateContainer within sandbox \"fffe27f3536fd38eaa3264d35d61422884f3d05154d5c65d75c10cc4f71cb7f6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 13 00:47:47.109901 env[1193]: time="2025-09-13T00:47:47.108281571Z" level=info msg="CreateContainer within sandbox \"fffe27f3536fd38eaa3264d35d61422884f3d05154d5c65d75c10cc4f71cb7f6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f18453d2939340508c877d61f3008447f5118c52e54aa42806308bf5ff57c3b2\"" Sep 13 00:47:47.112122 env[1193]: time="2025-09-13T00:47:47.112055119Z" level=info msg="StartContainer for \"f18453d2939340508c877d61f3008447f5118c52e54aa42806308bf5ff57c3b2\"" Sep 13 00:47:47.125221 systemd[1]: Started cri-containerd-6a3cd563c6f86f62d6845ee1413538305315dff314b0292e73b7c6166cab0606.scope. 
Sep 13 00:47:47.135032 kubelet[1549]: W0913 00:47:47.134902 1549 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://146.190.148.102:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 146.190.148.102:6443: connect: connection refused Sep 13 00:47:47.135032 kubelet[1549]: E0913 00:47:47.134984 1549 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://146.190.148.102:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 146.190.148.102:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:47:47.148816 systemd[1]: Started cri-containerd-f18453d2939340508c877d61f3008447f5118c52e54aa42806308bf5ff57c3b2.scope. Sep 13 00:47:47.169677 env[1193]: time="2025-09-13T00:47:47.169620093Z" level=info msg="StartContainer for \"1407bfc43a410dd1a0ba5536772ec91c7f52930eafeecc1942cb93fdd214ae8b\" returns successfully" Sep 13 00:47:47.211145 env[1193]: time="2025-09-13T00:47:47.211092345Z" level=info msg="StartContainer for \"6a3cd563c6f86f62d6845ee1413538305315dff314b0292e73b7c6166cab0606\" returns successfully" Sep 13 00:47:47.246768 env[1193]: time="2025-09-13T00:47:47.246705734Z" level=info msg="StartContainer for \"f18453d2939340508c877d61f3008447f5118c52e54aa42806308bf5ff57c3b2\" returns successfully" Sep 13 00:47:47.354028 kubelet[1549]: E0913 00:47:47.353871 1549 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.148.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-17df7d76e4?timeout=10s\": dial tcp 146.190.148.102:6443: connect: connection refused" interval="1.6s" Sep 13 00:47:47.412280 kubelet[1549]: W0913 00:47:47.412130 1549 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://146.190.148.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.148.102:6443: connect: connection refused Sep 13 00:47:47.412280 kubelet[1549]: E0913 00:47:47.412229 1549 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://146.190.148.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 146.190.148.102:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:47:47.520661 kubelet[1549]: I0913 00:47:47.520617 1549 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-17df7d76e4" Sep 13 00:47:47.521432 kubelet[1549]: E0913 00:47:47.521401 1549 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://146.190.148.102:6443/api/v1/nodes\": dial tcp 146.190.148.102:6443: connect: connection refused" node="ci-3510.3.8-n-17df7d76e4" Sep 13 00:47:47.981070 kubelet[1549]: E0913 00:47:47.981031 1549 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-17df7d76e4\" not found" node="ci-3510.3.8-n-17df7d76e4" Sep 13 00:47:47.981603 kubelet[1549]: E0913 00:47:47.981573 1549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:47:47.983604 kubelet[1549]: E0913 00:47:47.983578 1549 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-17df7d76e4\" not found" node="ci-3510.3.8-n-17df7d76e4" Sep 13 00:47:47.983912 kubelet[1549]: E0913 00:47:47.983895 1549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 
00:47:47.985659 kubelet[1549]: E0913 00:47:47.985638 1549 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-17df7d76e4\" not found" node="ci-3510.3.8-n-17df7d76e4" Sep 13 00:47:47.985890 kubelet[1549]: E0913 00:47:47.985876 1549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:47:48.988185 kubelet[1549]: E0913 00:47:48.988152 1549 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-17df7d76e4\" not found" node="ci-3510.3.8-n-17df7d76e4" Sep 13 00:47:48.988912 kubelet[1549]: E0913 00:47:48.988894 1549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:47:48.989273 kubelet[1549]: E0913 00:47:48.989256 1549 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-17df7d76e4\" not found" node="ci-3510.3.8-n-17df7d76e4" Sep 13 00:47:48.989446 kubelet[1549]: E0913 00:47:48.989432 1549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:47:49.123260 kubelet[1549]: I0913 00:47:49.123223 1549 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-17df7d76e4" Sep 13 00:47:49.829524 kubelet[1549]: E0913 00:47:49.829439 1549 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.8-n-17df7d76e4\" not found" node="ci-3510.3.8-n-17df7d76e4" Sep 13 00:47:49.836434 kubelet[1549]: I0913 00:47:49.836396 1549 kubelet_node_status.go:78] "Successfully registered node" 
node="ci-3510.3.8-n-17df7d76e4" Sep 13 00:47:49.843871 kubelet[1549]: I0913 00:47:49.843812 1549 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-17df7d76e4" Sep 13 00:47:49.862502 kubelet[1549]: E0913 00:47:49.862455 1549 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.8-n-17df7d76e4\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-3510.3.8-n-17df7d76e4" Sep 13 00:47:49.862502 kubelet[1549]: I0913 00:47:49.862499 1549 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-17df7d76e4" Sep 13 00:47:49.867352 kubelet[1549]: E0913 00:47:49.867286 1549 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.8-n-17df7d76e4\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510.3.8-n-17df7d76e4" Sep 13 00:47:49.867352 kubelet[1549]: I0913 00:47:49.867343 1549 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-17df7d76e4" Sep 13 00:47:49.870436 kubelet[1549]: E0913 00:47:49.870373 1549 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-3510.3.8-n-17df7d76e4\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-17df7d76e4" Sep 13 00:47:49.911131 kubelet[1549]: I0913 00:47:49.911069 1549 apiserver.go:52] "Watching apiserver" Sep 13 00:47:49.944100 kubelet[1549]: I0913 00:47:49.944063 1549 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 13 00:47:49.989263 kubelet[1549]: I0913 00:47:49.989221 1549 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-17df7d76e4" Sep 13 00:47:49.992491 kubelet[1549]: E0913 00:47:49.992442 1549 
kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.8-n-17df7d76e4\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-3510.3.8-n-17df7d76e4" Sep 13 00:47:49.992703 kubelet[1549]: E0913 00:47:49.992630 1549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:47:52.103818 kubelet[1549]: I0913 00:47:52.103778 1549 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-17df7d76e4" Sep 13 00:47:52.117206 kubelet[1549]: W0913 00:47:52.117163 1549 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 00:47:52.117790 kubelet[1549]: E0913 00:47:52.117764 1549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:47:52.147204 systemd[1]: Reloading. Sep 13 00:47:52.253445 /usr/lib/systemd/system-generators/torcx-generator[1835]: time="2025-09-13T00:47:52Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 13 00:47:52.253478 /usr/lib/systemd/system-generators/torcx-generator[1835]: time="2025-09-13T00:47:52Z" level=info msg="torcx already run" Sep 13 00:47:52.366588 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Sep 13 00:47:52.366609 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 13 00:47:52.386814 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:47:52.508882 systemd[1]: Stopping kubelet.service... Sep 13 00:47:52.529914 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:47:52.530202 systemd[1]: Stopped kubelet.service. Sep 13 00:47:52.530270 systemd[1]: kubelet.service: Consumed 1.151s CPU time. Sep 13 00:47:52.532545 systemd[1]: Starting kubelet.service... Sep 13 00:47:53.551283 systemd[1]: Started kubelet.service. Sep 13 00:47:53.625615 kubelet[1885]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:47:53.626063 kubelet[1885]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 13 00:47:53.626131 kubelet[1885]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 13 00:47:53.628644 kubelet[1885]: I0913 00:47:53.628577 1885 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:47:53.645913 kubelet[1885]: I0913 00:47:53.645802 1885 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 13 00:47:53.645913 kubelet[1885]: I0913 00:47:53.645844 1885 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:47:53.653745 kubelet[1885]: I0913 00:47:53.653701 1885 server.go:954] "Client rotation is on, will bootstrap in background" Sep 13 00:47:53.667705 kubelet[1885]: I0913 00:47:53.667128 1885 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 13 00:47:53.672635 sudo[1900]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 13 00:47:53.674368 kubelet[1885]: I0913 00:47:53.673189 1885 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:47:53.672976 sudo[1900]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Sep 13 00:47:53.688040 kubelet[1885]: E0913 00:47:53.687990 1885 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:47:53.688040 kubelet[1885]: I0913 00:47:53.688033 1885 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:47:53.693026 kubelet[1885]: I0913 00:47:53.692987 1885 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 00:47:53.693329 kubelet[1885]: I0913 00:47:53.693289 1885 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:47:53.693536 kubelet[1885]: I0913 00:47:53.693327 1885 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-n-17df7d76e4","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 00:47:53.693653 kubelet[1885]: I0913 00:47:53.693552 1885 topology_manager.go:138] "Creating topology manager 
with none policy" Sep 13 00:47:53.693653 kubelet[1885]: I0913 00:47:53.693570 1885 container_manager_linux.go:304] "Creating device plugin manager" Sep 13 00:47:53.696947 kubelet[1885]: I0913 00:47:53.696909 1885 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:47:53.697133 kubelet[1885]: I0913 00:47:53.697121 1885 kubelet.go:446] "Attempting to sync node with API server" Sep 13 00:47:53.697172 kubelet[1885]: I0913 00:47:53.697145 1885 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:47:53.697886 kubelet[1885]: I0913 00:47:53.697669 1885 kubelet.go:352] "Adding apiserver pod source" Sep 13 00:47:53.710487 kubelet[1885]: I0913 00:47:53.710442 1885 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:47:53.717631 kubelet[1885]: I0913 00:47:53.717372 1885 apiserver.go:52] "Watching apiserver" Sep 13 00:47:53.721251 kubelet[1885]: I0913 00:47:53.720831 1885 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 13 00:47:53.722403 kubelet[1885]: I0913 00:47:53.722177 1885 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:47:53.724902 kubelet[1885]: I0913 00:47:53.724523 1885 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 13 00:47:53.724902 kubelet[1885]: I0913 00:47:53.724582 1885 server.go:1287] "Started kubelet" Sep 13 00:47:53.733298 kubelet[1885]: I0913 00:47:53.733265 1885 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:47:53.738707 kubelet[1885]: E0913 00:47:53.738607 1885 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:47:53.742824 kubelet[1885]: I0913 00:47:53.742744 1885 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:47:53.745378 kubelet[1885]: I0913 00:47:53.745346 1885 server.go:479] "Adding debug handlers to kubelet server" Sep 13 00:47:53.748753 kubelet[1885]: I0913 00:47:53.748688 1885 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:47:53.749162 kubelet[1885]: I0913 00:47:53.749142 1885 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:47:53.749549 kubelet[1885]: I0913 00:47:53.749520 1885 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:47:53.751401 kubelet[1885]: I0913 00:47:53.751377 1885 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 13 00:47:53.751647 kubelet[1885]: I0913 00:47:53.751628 1885 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 13 00:47:53.751917 kubelet[1885]: I0913 00:47:53.751904 1885 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:47:53.752937 kubelet[1885]: I0913 00:47:53.752919 1885 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:47:53.753185 kubelet[1885]: I0913 00:47:53.753152 1885 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:47:53.758199 kubelet[1885]: I0913 00:47:53.758158 1885 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:47:53.764153 kubelet[1885]: I0913 00:47:53.764106 1885 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Sep 13 00:47:53.765369 kubelet[1885]: I0913 00:47:53.765336 1885 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 13 00:47:53.765570 kubelet[1885]: I0913 00:47:53.765551 1885 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 13 00:47:53.765683 kubelet[1885]: I0913 00:47:53.765670 1885 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 13 00:47:53.765759 kubelet[1885]: I0913 00:47:53.765749 1885 kubelet.go:2382] "Starting kubelet main sync loop" Sep 13 00:47:53.765983 kubelet[1885]: E0913 00:47:53.765962 1885 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:47:53.857728 kubelet[1885]: I0913 00:47:53.854210 1885 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 13 00:47:53.857728 kubelet[1885]: I0913 00:47:53.854231 1885 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 13 00:47:53.857728 kubelet[1885]: I0913 00:47:53.854255 1885 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:47:53.857728 kubelet[1885]: I0913 00:47:53.854651 1885 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 13 00:47:53.857728 kubelet[1885]: I0913 00:47:53.854666 1885 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 13 00:47:53.857728 kubelet[1885]: I0913 00:47:53.854685 1885 policy_none.go:49] "None policy: Start" Sep 13 00:47:53.857728 kubelet[1885]: I0913 00:47:53.854699 1885 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 13 00:47:53.857728 kubelet[1885]: I0913 00:47:53.854712 1885 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:47:53.857728 kubelet[1885]: I0913 00:47:53.854822 1885 state_mem.go:75] "Updated machine memory state" Sep 13 00:47:53.863964 kubelet[1885]: I0913 00:47:53.863933 1885 manager.go:519] "Failed to read 
data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:47:53.864334 kubelet[1885]: I0913 00:47:53.864312 1885 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:47:53.870664 kubelet[1885]: E0913 00:47:53.866102 1885 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 13 00:47:53.871825 kubelet[1885]: I0913 00:47:53.870577 1885 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:47:53.872464 kubelet[1885]: I0913 00:47:53.872448 1885 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:47:53.873754 kubelet[1885]: E0913 00:47:53.873728 1885 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 13 00:47:53.981074 kubelet[1885]: I0913 00:47:53.981034 1885 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-17df7d76e4" Sep 13 00:47:53.997004 kubelet[1885]: I0913 00:47:53.996952 1885 kubelet_node_status.go:124] "Node was previously registered" node="ci-3510.3.8-n-17df7d76e4" Sep 13 00:47:53.997325 kubelet[1885]: I0913 00:47:53.997308 1885 kubelet_node_status.go:78] "Successfully registered node" node="ci-3510.3.8-n-17df7d76e4" Sep 13 00:47:54.075476 kubelet[1885]: I0913 00:47:54.075436 1885 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-17df7d76e4" Sep 13 00:47:54.077846 kubelet[1885]: I0913 00:47:54.077116 1885 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-17df7d76e4" Sep 13 00:47:54.078320 kubelet[1885]: I0913 00:47:54.077281 1885 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-17df7d76e4" Sep 13 00:47:54.119935 kubelet[1885]: W0913 00:47:54.119797 1885 
warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 00:47:54.120575 kubelet[1885]: W0913 00:47:54.120540 1885 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 00:47:54.120816 kubelet[1885]: E0913 00:47:54.120765 1885 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-3510.3.8-n-17df7d76e4\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-17df7d76e4" Sep 13 00:47:54.128103 kubelet[1885]: W0913 00:47:54.128062 1885 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 00:47:54.144467 kubelet[1885]: I0913 00:47:54.144343 1885 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-17df7d76e4" podStartSLOduration=2.14428509 podStartE2EDuration="2.14428509s" podCreationTimestamp="2025-09-13 00:47:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:47:54.130894162 +0000 UTC m=+0.568288970" watchObservedRunningTime="2025-09-13 00:47:54.14428509 +0000 UTC m=+0.581679891" Sep 13 00:47:54.152639 kubelet[1885]: I0913 00:47:54.152600 1885 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 13 00:47:54.154278 kubelet[1885]: I0913 00:47:54.154232 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c13fcec80446876b10f61b39445a1262-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-17df7d76e4\" (UID: \"c13fcec80446876b10f61b39445a1262\") " 
pod="kube-system/kube-controller-manager-ci-3510.3.8-n-17df7d76e4" Sep 13 00:47:54.154744 kubelet[1885]: I0913 00:47:54.154662 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f81f27e5a7d8617e51ef26f289f079b-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-17df7d76e4\" (UID: \"3f81f27e5a7d8617e51ef26f289f079b\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-17df7d76e4" Sep 13 00:47:54.154983 kubelet[1885]: I0913 00:47:54.154946 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f81f27e5a7d8617e51ef26f289f079b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-17df7d76e4\" (UID: \"3f81f27e5a7d8617e51ef26f289f079b\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-17df7d76e4" Sep 13 00:47:54.155171 kubelet[1885]: I0913 00:47:54.155148 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c13fcec80446876b10f61b39445a1262-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-17df7d76e4\" (UID: \"c13fcec80446876b10f61b39445a1262\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-17df7d76e4" Sep 13 00:47:54.155368 kubelet[1885]: I0913 00:47:54.155348 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c13fcec80446876b10f61b39445a1262-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-17df7d76e4\" (UID: \"c13fcec80446876b10f61b39445a1262\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-17df7d76e4" Sep 13 00:47:54.155578 kubelet[1885]: I0913 00:47:54.155538 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/c13fcec80446876b10f61b39445a1262-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-17df7d76e4\" (UID: \"c13fcec80446876b10f61b39445a1262\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-17df7d76e4" Sep 13 00:47:54.155748 kubelet[1885]: I0913 00:47:54.155718 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/87c329bc66e8731f53abd330e3cdfad4-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-17df7d76e4\" (UID: \"87c329bc66e8731f53abd330e3cdfad4\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-17df7d76e4" Sep 13 00:47:54.155926 kubelet[1885]: I0913 00:47:54.155890 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f81f27e5a7d8617e51ef26f289f079b-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-17df7d76e4\" (UID: \"3f81f27e5a7d8617e51ef26f289f079b\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-17df7d76e4" Sep 13 00:47:54.156085 kubelet[1885]: I0913 00:47:54.156066 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c13fcec80446876b10f61b39445a1262-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-17df7d76e4\" (UID: \"c13fcec80446876b10f61b39445a1262\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-17df7d76e4" Sep 13 00:47:54.159722 kubelet[1885]: I0913 00:47:54.159648 1885 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.8-n-17df7d76e4" podStartSLOduration=0.159629354 podStartE2EDuration="159.629354ms" podCreationTimestamp="2025-09-13 00:47:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:47:54.158093616 +0000 UTC 
m=+0.595488416" watchObservedRunningTime="2025-09-13 00:47:54.159629354 +0000 UTC m=+0.597024153"
Sep 13 00:47:54.160097 kubelet[1885]: I0913 00:47:54.160056 1885 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.8-n-17df7d76e4" podStartSLOduration=0.160040213 podStartE2EDuration="160.040213ms" podCreationTimestamp="2025-09-13 00:47:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:47:54.145339196 +0000 UTC m=+0.582733996" watchObservedRunningTime="2025-09-13 00:47:54.160040213 +0000 UTC m=+0.597435013"
Sep 13 00:47:54.421644 kubelet[1885]: E0913 00:47:54.421507 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:47:54.422123 kubelet[1885]: E0913 00:47:54.422090 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:47:54.429056 kubelet[1885]: E0913 00:47:54.429012 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:47:54.551535 sudo[1900]: pam_unix(sudo:session): session closed for user root
Sep 13 00:47:54.812181 kubelet[1885]: E0913 00:47:54.812064 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:47:54.814006 kubelet[1885]: E0913 00:47:54.813010 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:47:54.814006 kubelet[1885]: E0913 00:47:54.813212 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:47:55.814297 kubelet[1885]: E0913 00:47:55.814246 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:47:55.815899 kubelet[1885]: E0913 00:47:55.814979 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:47:55.815899 kubelet[1885]: E0913 00:47:55.815285 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:47:56.322840 sudo[1291]: pam_unix(sudo:session): session closed for user root
Sep 13 00:47:56.327272 sshd[1287]: pam_unix(sshd:session): session closed for user core
Sep 13 00:47:56.331613 systemd-logind[1182]: Session 5 logged out. Waiting for processes to exit.
Sep 13 00:47:56.332526 systemd[1]: sshd@4-146.190.148.102:22-147.75.109.163:57274.service: Deactivated successfully.
Sep 13 00:47:56.333348 systemd[1]: session-5.scope: Deactivated successfully.
Sep 13 00:47:56.333487 systemd[1]: session-5.scope: Consumed 5.307s CPU time.
Sep 13 00:47:56.334873 systemd-logind[1182]: Removed session 5.
Sep 13 00:47:58.407902 kubelet[1885]: E0913 00:47:58.407839 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:47:58.630769 kubelet[1885]: I0913 00:47:58.630737 1885 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 13 00:47:58.631374 env[1193]: time="2025-09-13T00:47:58.631286883Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 13 00:47:58.631874 kubelet[1885]: I0913 00:47:58.631840 1885 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 13 00:47:58.820320 kubelet[1885]: E0913 00:47:58.820284 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:47:59.414305 systemd[1]: Created slice kubepods-besteffort-pode99afcf0_4c99_44df_b86c_fd16614bf9fc.slice.
Sep 13 00:47:59.428586 systemd[1]: Created slice kubepods-burstable-pod3ca9ead0_c9c5_4a4f_b09c_fd481be229f2.slice.
Sep 13 00:47:59.492320 kubelet[1885]: I0913 00:47:59.492270 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e99afcf0-4c99-44df-b86c-fd16614bf9fc-xtables-lock\") pod \"kube-proxy-2w9nc\" (UID: \"e99afcf0-4c99-44df-b86c-fd16614bf9fc\") " pod="kube-system/kube-proxy-2w9nc"
Sep 13 00:47:59.492981 kubelet[1885]: I0913 00:47:59.492930 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3ca9ead0-c9c5-4a4f-b09c-fd481be229f2-cni-path\") pod \"cilium-6zjrw\" (UID: \"3ca9ead0-c9c5-4a4f-b09c-fd481be229f2\") " pod="kube-system/cilium-6zjrw"
Sep 13 00:47:59.493195 kubelet[1885]: I0913 00:47:59.493158 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3ca9ead0-c9c5-4a4f-b09c-fd481be229f2-cilium-config-path\") pod \"cilium-6zjrw\" (UID: \"3ca9ead0-c9c5-4a4f-b09c-fd481be229f2\") " pod="kube-system/cilium-6zjrw"
Sep 13 00:47:59.493354 kubelet[1885]: I0913 00:47:59.493336 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3ca9ead0-c9c5-4a4f-b09c-fd481be229f2-hubble-tls\") pod \"cilium-6zjrw\" (UID: \"3ca9ead0-c9c5-4a4f-b09c-fd481be229f2\") " pod="kube-system/cilium-6zjrw"
Sep 13 00:47:59.493511 kubelet[1885]: I0913 00:47:59.493487 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3ca9ead0-c9c5-4a4f-b09c-fd481be229f2-cilium-run\") pod \"cilium-6zjrw\" (UID: \"3ca9ead0-c9c5-4a4f-b09c-fd481be229f2\") " pod="kube-system/cilium-6zjrw"
Sep 13 00:47:59.493654 kubelet[1885]: I0913 00:47:59.493632 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3ca9ead0-c9c5-4a4f-b09c-fd481be229f2-cilium-cgroup\") pod \"cilium-6zjrw\" (UID: \"3ca9ead0-c9c5-4a4f-b09c-fd481be229f2\") " pod="kube-system/cilium-6zjrw"
Sep 13 00:47:59.493832 kubelet[1885]: I0913 00:47:59.493793 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e99afcf0-4c99-44df-b86c-fd16614bf9fc-lib-modules\") pod \"kube-proxy-2w9nc\" (UID: \"e99afcf0-4c99-44df-b86c-fd16614bf9fc\") " pod="kube-system/kube-proxy-2w9nc"
Sep 13 00:47:59.493984 kubelet[1885]: I0913 00:47:59.493965 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3ca9ead0-c9c5-4a4f-b09c-fd481be229f2-host-proc-sys-net\") pod \"cilium-6zjrw\" (UID: \"3ca9ead0-c9c5-4a4f-b09c-fd481be229f2\") " pod="kube-system/cilium-6zjrw"
Sep 13 00:47:59.494173 kubelet[1885]: I0913 00:47:59.494135 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3ca9ead0-c9c5-4a4f-b09c-fd481be229f2-clustermesh-secrets\") pod \"cilium-6zjrw\" (UID: \"3ca9ead0-c9c5-4a4f-b09c-fd481be229f2\") " pod="kube-system/cilium-6zjrw"
Sep 13 00:47:59.494322 kubelet[1885]: I0913 00:47:59.494303 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3ca9ead0-c9c5-4a4f-b09c-fd481be229f2-lib-modules\") pod \"cilium-6zjrw\" (UID: \"3ca9ead0-c9c5-4a4f-b09c-fd481be229f2\") " pod="kube-system/cilium-6zjrw"
Sep 13 00:47:59.494482 kubelet[1885]: I0913 00:47:59.494446 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3ca9ead0-c9c5-4a4f-b09c-fd481be229f2-etc-cni-netd\") pod \"cilium-6zjrw\" (UID: \"3ca9ead0-c9c5-4a4f-b09c-fd481be229f2\") " pod="kube-system/cilium-6zjrw"
Sep 13 00:47:59.494599 kubelet[1885]: I0913 00:47:59.494579 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3ca9ead0-c9c5-4a4f-b09c-fd481be229f2-xtables-lock\") pod \"cilium-6zjrw\" (UID: \"3ca9ead0-c9c5-4a4f-b09c-fd481be229f2\") " pod="kube-system/cilium-6zjrw"
Sep 13 00:47:59.494783 kubelet[1885]: I0913 00:47:59.494749 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3ca9ead0-c9c5-4a4f-b09c-fd481be229f2-host-proc-sys-kernel\") pod \"cilium-6zjrw\" (UID: \"3ca9ead0-c9c5-4a4f-b09c-fd481be229f2\") " pod="kube-system/cilium-6zjrw"
Sep 13 00:47:59.494945 kubelet[1885]: I0913 00:47:59.494927 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e99afcf0-4c99-44df-b86c-fd16614bf9fc-kube-proxy\") pod \"kube-proxy-2w9nc\" (UID: \"e99afcf0-4c99-44df-b86c-fd16614bf9fc\") " pod="kube-system/kube-proxy-2w9nc"
Sep 13 00:47:59.495122 kubelet[1885]: I0913 00:47:59.495092 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3ca9ead0-c9c5-4a4f-b09c-fd481be229f2-bpf-maps\") pod \"cilium-6zjrw\" (UID: \"3ca9ead0-c9c5-4a4f-b09c-fd481be229f2\") " pod="kube-system/cilium-6zjrw"
Sep 13 00:47:59.495235 kubelet[1885]: I0913 00:47:59.495216 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3ca9ead0-c9c5-4a4f-b09c-fd481be229f2-hostproc\") pod \"cilium-6zjrw\" (UID: \"3ca9ead0-c9c5-4a4f-b09c-fd481be229f2\") " pod="kube-system/cilium-6zjrw"
Sep 13 00:47:59.495347 kubelet[1885]: I0913 00:47:59.495324 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dmbd\" (UniqueName: \"kubernetes.io/projected/3ca9ead0-c9c5-4a4f-b09c-fd481be229f2-kube-api-access-9dmbd\") pod \"cilium-6zjrw\" (UID: \"3ca9ead0-c9c5-4a4f-b09c-fd481be229f2\") " pod="kube-system/cilium-6zjrw"
Sep 13 00:47:59.495448 kubelet[1885]: I0913 00:47:59.495428 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm4r9\" (UniqueName: \"kubernetes.io/projected/e99afcf0-4c99-44df-b86c-fd16614bf9fc-kube-api-access-rm4r9\") pod \"kube-proxy-2w9nc\" (UID: \"e99afcf0-4c99-44df-b86c-fd16614bf9fc\") " pod="kube-system/kube-proxy-2w9nc"
Sep 13 00:47:59.597025 kubelet[1885]: I0913 00:47:59.596982 1885 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Sep 13 00:47:59.670107 systemd[1]: Created slice kubepods-besteffort-podcdd75caa_aa8b_47ed_9685_e07fa3d84d90.slice.
Sep 13 00:47:59.696822 kubelet[1885]: I0913 00:47:59.696763 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kq28\" (UniqueName: \"kubernetes.io/projected/cdd75caa-aa8b-47ed-9685-e07fa3d84d90-kube-api-access-9kq28\") pod \"cilium-operator-6c4d7847fc-tv6fp\" (UID: \"cdd75caa-aa8b-47ed-9685-e07fa3d84d90\") " pod="kube-system/cilium-operator-6c4d7847fc-tv6fp"
Sep 13 00:47:59.697077 kubelet[1885]: I0913 00:47:59.696833 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cdd75caa-aa8b-47ed-9685-e07fa3d84d90-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-tv6fp\" (UID: \"cdd75caa-aa8b-47ed-9685-e07fa3d84d90\") " pod="kube-system/cilium-operator-6c4d7847fc-tv6fp"
Sep 13 00:47:59.722066 kubelet[1885]: E0913 00:47:59.722006 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:47:59.723156 env[1193]: time="2025-09-13T00:47:59.723104927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2w9nc,Uid:e99afcf0-4c99-44df-b86c-fd16614bf9fc,Namespace:kube-system,Attempt:0,}"
Sep 13 00:47:59.736316 kubelet[1885]: E0913 00:47:59.736277 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:47:59.737184 env[1193]: time="2025-09-13T00:47:59.736905178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6zjrw,Uid:3ca9ead0-c9c5-4a4f-b09c-fd481be229f2,Namespace:kube-system,Attempt:0,}"
Sep 13 00:47:59.752933 env[1193]: time="2025-09-13T00:47:59.750974950Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:47:59.752933 env[1193]: time="2025-09-13T00:47:59.751038178Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:47:59.752933 env[1193]: time="2025-09-13T00:47:59.751051366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:47:59.753451 env[1193]: time="2025-09-13T00:47:59.753364454Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ceb1132a37cc2421768bfbc9a21abe40f5886a790fc634bc08fee3eb3a68498e pid=1964 runtime=io.containerd.runc.v2
Sep 13 00:47:59.775578 env[1193]: time="2025-09-13T00:47:59.773741935Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:47:59.775578 env[1193]: time="2025-09-13T00:47:59.773842233Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:47:59.775578 env[1193]: time="2025-09-13T00:47:59.773881961Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:47:59.775578 env[1193]: time="2025-09-13T00:47:59.774074407Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8d1c67c9d30ab6995e9f539b1a5cfecfb836ff8825e74999c1e4229b36a923d8 pid=1984 runtime=io.containerd.runc.v2
Sep 13 00:47:59.775355 systemd[1]: Started cri-containerd-ceb1132a37cc2421768bfbc9a21abe40f5886a790fc634bc08fee3eb3a68498e.scope.
Sep 13 00:47:59.812908 kubelet[1885]: E0913 00:47:59.812760 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:47:59.815446 systemd[1]: Started cri-containerd-8d1c67c9d30ab6995e9f539b1a5cfecfb836ff8825e74999c1e4229b36a923d8.scope.
Sep 13 00:47:59.824695 kubelet[1885]: E0913 00:47:59.824246 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:47:59.843532 kubelet[1885]: E0913 00:47:59.843494 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:47:59.878715 env[1193]: time="2025-09-13T00:47:59.878669712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6zjrw,Uid:3ca9ead0-c9c5-4a4f-b09c-fd481be229f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"8d1c67c9d30ab6995e9f539b1a5cfecfb836ff8825e74999c1e4229b36a923d8\""
Sep 13 00:47:59.879448 kubelet[1885]: E0913 00:47:59.879423 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:47:59.882317 env[1193]: time="2025-09-13T00:47:59.882276147Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Sep 13 00:47:59.934096 env[1193]: time="2025-09-13T00:47:59.934026636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2w9nc,Uid:e99afcf0-4c99-44df-b86c-fd16614bf9fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"ceb1132a37cc2421768bfbc9a21abe40f5886a790fc634bc08fee3eb3a68498e\""
Sep 13 00:47:59.934799 kubelet[1885]: E0913 00:47:59.934774 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:47:59.938793 env[1193]: time="2025-09-13T00:47:59.938748871Z" level=info msg="CreateContainer within sandbox \"ceb1132a37cc2421768bfbc9a21abe40f5886a790fc634bc08fee3eb3a68498e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 13 00:47:59.955328 env[1193]: time="2025-09-13T00:47:59.955267222Z" level=info msg="CreateContainer within sandbox \"ceb1132a37cc2421768bfbc9a21abe40f5886a790fc634bc08fee3eb3a68498e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f15c965333cb7f69907d3a8b18ac82a4c27cbca5453fceed4affb323500bc3b6\""
Sep 13 00:47:59.956597 env[1193]: time="2025-09-13T00:47:59.956552523Z" level=info msg="StartContainer for \"f15c965333cb7f69907d3a8b18ac82a4c27cbca5453fceed4affb323500bc3b6\""
Sep 13 00:47:59.974161 kubelet[1885]: E0913 00:47:59.973541 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:47:59.980478 env[1193]: time="2025-09-13T00:47:59.980414892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-tv6fp,Uid:cdd75caa-aa8b-47ed-9685-e07fa3d84d90,Namespace:kube-system,Attempt:0,}"
Sep 13 00:47:59.989167 systemd[1]: Started cri-containerd-f15c965333cb7f69907d3a8b18ac82a4c27cbca5453fceed4affb323500bc3b6.scope.
Sep 13 00:48:00.027081 env[1193]: time="2025-09-13T00:48:00.026755415Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:48:00.027081 env[1193]: time="2025-09-13T00:48:00.026827815Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:48:00.027081 env[1193]: time="2025-09-13T00:48:00.026843772Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:48:00.027678 env[1193]: time="2025-09-13T00:48:00.027544208Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/73e5bfb36e236e7bcfab5487faf34d8b9a82b80875fead07dddb93427b7846d3 pid=2066 runtime=io.containerd.runc.v2
Sep 13 00:48:00.040265 env[1193]: time="2025-09-13T00:48:00.040190148Z" level=info msg="StartContainer for \"f15c965333cb7f69907d3a8b18ac82a4c27cbca5453fceed4affb323500bc3b6\" returns successfully"
Sep 13 00:48:00.053428 systemd[1]: Started cri-containerd-73e5bfb36e236e7bcfab5487faf34d8b9a82b80875fead07dddb93427b7846d3.scope.
Sep 13 00:48:00.112071 env[1193]: time="2025-09-13T00:48:00.112011414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-tv6fp,Uid:cdd75caa-aa8b-47ed-9685-e07fa3d84d90,Namespace:kube-system,Attempt:0,} returns sandbox id \"73e5bfb36e236e7bcfab5487faf34d8b9a82b80875fead07dddb93427b7846d3\""
Sep 13 00:48:00.113835 kubelet[1885]: E0913 00:48:00.113334 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:48:00.834332 kubelet[1885]: E0913 00:48:00.834284 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:48:00.836627 kubelet[1885]: E0913 00:48:00.834350 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:48:05.384556 kubelet[1885]: E0913 00:48:05.384516 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:48:05.410981 kubelet[1885]: I0913 00:48:05.409615 1885 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2w9nc" podStartSLOduration=6.409574919 podStartE2EDuration="6.409574919s" podCreationTimestamp="2025-09-13 00:47:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:48:00.852636938 +0000 UTC m=+7.290031736" watchObservedRunningTime="2025-09-13 00:48:05.409574919 +0000 UTC m=+11.846969736"
Sep 13 00:48:05.689538 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1817916405.mount: Deactivated successfully.
Sep 13 00:48:09.122663 env[1193]: time="2025-09-13T00:48:09.122541923Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:48:09.124065 env[1193]: time="2025-09-13T00:48:09.124025079Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:48:09.125942 env[1193]: time="2025-09-13T00:48:09.125909096Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:48:09.126904 env[1193]: time="2025-09-13T00:48:09.126847802Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Sep 13 00:48:09.130011 env[1193]: time="2025-09-13T00:48:09.129786036Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Sep 13 00:48:09.131279 env[1193]: time="2025-09-13T00:48:09.131241434Z" level=info msg="CreateContainer within sandbox \"8d1c67c9d30ab6995e9f539b1a5cfecfb836ff8825e74999c1e4229b36a923d8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 13 00:48:09.148155 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1910886780.mount: Deactivated successfully.
Sep 13 00:48:09.156915 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1307794176.mount: Deactivated successfully.
Sep 13 00:48:09.160907 env[1193]: time="2025-09-13T00:48:09.160842989Z" level=info msg="CreateContainer within sandbox \"8d1c67c9d30ab6995e9f539b1a5cfecfb836ff8825e74999c1e4229b36a923d8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5dd7dd4f59e89d4ab0af8955656efa71a24a35bce8818607222e11e62ee7ab05\""
Sep 13 00:48:09.163398 env[1193]: time="2025-09-13T00:48:09.163358959Z" level=info msg="StartContainer for \"5dd7dd4f59e89d4ab0af8955656efa71a24a35bce8818607222e11e62ee7ab05\""
Sep 13 00:48:09.195039 systemd[1]: Started cri-containerd-5dd7dd4f59e89d4ab0af8955656efa71a24a35bce8818607222e11e62ee7ab05.scope.
Sep 13 00:48:09.241457 env[1193]: time="2025-09-13T00:48:09.241206255Z" level=info msg="StartContainer for \"5dd7dd4f59e89d4ab0af8955656efa71a24a35bce8818607222e11e62ee7ab05\" returns successfully"
Sep 13 00:48:09.249676 systemd[1]: cri-containerd-5dd7dd4f59e89d4ab0af8955656efa71a24a35bce8818607222e11e62ee7ab05.scope: Deactivated successfully.
Sep 13 00:48:09.327736 env[1193]: time="2025-09-13T00:48:09.327671743Z" level=info msg="shim disconnected" id=5dd7dd4f59e89d4ab0af8955656efa71a24a35bce8818607222e11e62ee7ab05
Sep 13 00:48:09.328411 update_engine[1187]: I0913 00:48:09.328359 1187 update_attempter.cc:509] Updating boot flags...
Sep 13 00:48:09.329114 env[1193]: time="2025-09-13T00:48:09.328598912Z" level=warning msg="cleaning up after shim disconnected" id=5dd7dd4f59e89d4ab0af8955656efa71a24a35bce8818607222e11e62ee7ab05 namespace=k8s.io
Sep 13 00:48:09.329114 env[1193]: time="2025-09-13T00:48:09.328623471Z" level=info msg="cleaning up dead shim"
Sep 13 00:48:09.344156 env[1193]: time="2025-09-13T00:48:09.344105154Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:48:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2289 runtime=io.containerd.runc.v2\n"
Sep 13 00:48:09.867482 kubelet[1885]: E0913 00:48:09.867434 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:48:09.871107 env[1193]: time="2025-09-13T00:48:09.870995070Z" level=info msg="CreateContainer within sandbox \"8d1c67c9d30ab6995e9f539b1a5cfecfb836ff8825e74999c1e4229b36a923d8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 13 00:48:09.886919 env[1193]: time="2025-09-13T00:48:09.886820466Z" level=info msg="CreateContainer within sandbox \"8d1c67c9d30ab6995e9f539b1a5cfecfb836ff8825e74999c1e4229b36a923d8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"92b47945b789476bf67b67bb0b1c0cda0510b9928a17e0bdf9fd584bf7ad19ba\""
Sep 13 00:48:09.887865 env[1193]: time="2025-09-13T00:48:09.887806714Z" level=info msg="StartContainer for \"92b47945b789476bf67b67bb0b1c0cda0510b9928a17e0bdf9fd584bf7ad19ba\""
Sep 13 00:48:09.906121 systemd[1]: Started cri-containerd-92b47945b789476bf67b67bb0b1c0cda0510b9928a17e0bdf9fd584bf7ad19ba.scope.
Sep 13 00:48:09.956496 env[1193]: time="2025-09-13T00:48:09.956424290Z" level=info msg="StartContainer for \"92b47945b789476bf67b67bb0b1c0cda0510b9928a17e0bdf9fd584bf7ad19ba\" returns successfully"
Sep 13 00:48:09.966290 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 13 00:48:09.966538 systemd[1]: Stopped systemd-sysctl.service.
Sep 13 00:48:09.967428 systemd[1]: Stopping systemd-sysctl.service...
Sep 13 00:48:09.969403 systemd[1]: Starting systemd-sysctl.service...
Sep 13 00:48:09.979898 systemd[1]: cri-containerd-92b47945b789476bf67b67bb0b1c0cda0510b9928a17e0bdf9fd584bf7ad19ba.scope: Deactivated successfully.
Sep 13 00:48:09.983044 systemd[1]: Finished systemd-sysctl.service.
Sep 13 00:48:10.010752 env[1193]: time="2025-09-13T00:48:10.010688652Z" level=info msg="shim disconnected" id=92b47945b789476bf67b67bb0b1c0cda0510b9928a17e0bdf9fd584bf7ad19ba
Sep 13 00:48:10.010752 env[1193]: time="2025-09-13T00:48:10.010736705Z" level=warning msg="cleaning up after shim disconnected" id=92b47945b789476bf67b67bb0b1c0cda0510b9928a17e0bdf9fd584bf7ad19ba namespace=k8s.io
Sep 13 00:48:10.010752 env[1193]: time="2025-09-13T00:48:10.010746117Z" level=info msg="cleaning up dead shim"
Sep 13 00:48:10.020011 env[1193]: time="2025-09-13T00:48:10.019960042Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:48:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2372 runtime=io.containerd.runc.v2\n"
Sep 13 00:48:10.145423 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5dd7dd4f59e89d4ab0af8955656efa71a24a35bce8818607222e11e62ee7ab05-rootfs.mount: Deactivated successfully.
Sep 13 00:48:10.281656 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3087583139.mount: Deactivated successfully.
Sep 13 00:48:10.870763 kubelet[1885]: E0913 00:48:10.870722 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:48:10.876262 env[1193]: time="2025-09-13T00:48:10.876215313Z" level=info msg="CreateContainer within sandbox \"8d1c67c9d30ab6995e9f539b1a5cfecfb836ff8825e74999c1e4229b36a923d8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 13 00:48:10.903559 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1542192263.mount: Deactivated successfully.
Sep 13 00:48:10.908326 env[1193]: time="2025-09-13T00:48:10.908266760Z" level=info msg="CreateContainer within sandbox \"8d1c67c9d30ab6995e9f539b1a5cfecfb836ff8825e74999c1e4229b36a923d8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f6deedeee669a39bfdd356725d50bbc323d51f58bfe6573622d1db9b2836a9d2\""
Sep 13 00:48:10.908926 env[1193]: time="2025-09-13T00:48:10.908899485Z" level=info msg="StartContainer for \"f6deedeee669a39bfdd356725d50bbc323d51f58bfe6573622d1db9b2836a9d2\""
Sep 13 00:48:10.941195 systemd[1]: Started cri-containerd-f6deedeee669a39bfdd356725d50bbc323d51f58bfe6573622d1db9b2836a9d2.scope.
Sep 13 00:48:10.988186 systemd[1]: cri-containerd-f6deedeee669a39bfdd356725d50bbc323d51f58bfe6573622d1db9b2836a9d2.scope: Deactivated successfully.
Sep 13 00:48:10.989075 env[1193]: time="2025-09-13T00:48:10.988988202Z" level=info msg="StartContainer for \"f6deedeee669a39bfdd356725d50bbc323d51f58bfe6573622d1db9b2836a9d2\" returns successfully"
Sep 13 00:48:11.051022 env[1193]: time="2025-09-13T00:48:11.050968151Z" level=info msg="shim disconnected" id=f6deedeee669a39bfdd356725d50bbc323d51f58bfe6573622d1db9b2836a9d2
Sep 13 00:48:11.051022 env[1193]: time="2025-09-13T00:48:11.051016506Z" level=warning msg="cleaning up after shim disconnected" id=f6deedeee669a39bfdd356725d50bbc323d51f58bfe6573622d1db9b2836a9d2 namespace=k8s.io
Sep 13 00:48:11.051022 env[1193]: time="2025-09-13T00:48:11.051026117Z" level=info msg="cleaning up dead shim"
Sep 13 00:48:11.068709 env[1193]: time="2025-09-13T00:48:11.068654658Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:48:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2431 runtime=io.containerd.runc.v2\n"
Sep 13 00:48:11.468004 env[1193]: time="2025-09-13T00:48:11.467939385Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:48:11.469098 env[1193]: time="2025-09-13T00:48:11.469055739Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:48:11.470763 env[1193]: time="2025-09-13T00:48:11.470728066Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 13 00:48:11.471297 env[1193]: time="2025-09-13T00:48:11.471263291Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Sep 13 00:48:11.475462 env[1193]: time="2025-09-13T00:48:11.475426046Z" level=info msg="CreateContainer within sandbox \"73e5bfb36e236e7bcfab5487faf34d8b9a82b80875fead07dddb93427b7846d3\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 13 00:48:11.488171 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3216187763.mount: Deactivated successfully.
Sep 13 00:48:11.494836 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount456908471.mount: Deactivated successfully.
Sep 13 00:48:11.502658 env[1193]: time="2025-09-13T00:48:11.502610679Z" level=info msg="CreateContainer within sandbox \"73e5bfb36e236e7bcfab5487faf34d8b9a82b80875fead07dddb93427b7846d3\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"03c6cc52eb00337b4da4c762d64934e5cc9eca9a3e9124704a02b1fbde165eb9\""
Sep 13 00:48:11.504151 env[1193]: time="2025-09-13T00:48:11.504121662Z" level=info msg="StartContainer for \"03c6cc52eb00337b4da4c762d64934e5cc9eca9a3e9124704a02b1fbde165eb9\""
Sep 13 00:48:11.529005 systemd[1]: Started cri-containerd-03c6cc52eb00337b4da4c762d64934e5cc9eca9a3e9124704a02b1fbde165eb9.scope.
Sep 13 00:48:11.566082 env[1193]: time="2025-09-13T00:48:11.566033716Z" level=info msg="StartContainer for \"03c6cc52eb00337b4da4c762d64934e5cc9eca9a3e9124704a02b1fbde165eb9\" returns successfully"
Sep 13 00:48:11.874742 kubelet[1885]: E0913 00:48:11.874629 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:48:11.877607 kubelet[1885]: E0913 00:48:11.877579 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:48:11.879449 env[1193]: time="2025-09-13T00:48:11.879412113Z" level=info msg="CreateContainer within sandbox \"8d1c67c9d30ab6995e9f539b1a5cfecfb836ff8825e74999c1e4229b36a923d8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 13 00:48:11.893357 env[1193]: time="2025-09-13T00:48:11.893292425Z" level=info msg="CreateContainer within sandbox \"8d1c67c9d30ab6995e9f539b1a5cfecfb836ff8825e74999c1e4229b36a923d8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2389404360d243e28508919224f6d21b84e1f842cfda9b9f7690e6280390af70\""
Sep 13 00:48:11.894632 env[1193]: time="2025-09-13T00:48:11.894595034Z" level=info msg="StartContainer for \"2389404360d243e28508919224f6d21b84e1f842cfda9b9f7690e6280390af70\""
Sep 13 00:48:11.915222 kubelet[1885]: I0913 00:48:11.915055 1885 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-tv6fp" podStartSLOduration=1.557009777 podStartE2EDuration="12.9150347s" podCreationTimestamp="2025-09-13 00:47:59 +0000 UTC" firstStartedPulling="2025-09-13 00:48:00.114441353 +0000 UTC m=+6.551836144" lastFinishedPulling="2025-09-13 00:48:11.472466288 +0000 UTC m=+17.909861067" observedRunningTime="2025-09-13 00:48:11.914469238 +0000 UTC m=+18.351864038" watchObservedRunningTime="2025-09-13 00:48:11.9150347 +0000 UTC m=+18.352429497"
Sep 13 00:48:11.922457 systemd[1]: Started cri-containerd-2389404360d243e28508919224f6d21b84e1f842cfda9b9f7690e6280390af70.scope.
Sep 13 00:48:11.988985 env[1193]: time="2025-09-13T00:48:11.988931897Z" level=info msg="StartContainer for \"2389404360d243e28508919224f6d21b84e1f842cfda9b9f7690e6280390af70\" returns successfully"
Sep 13 00:48:11.994816 systemd[1]: cri-containerd-2389404360d243e28508919224f6d21b84e1f842cfda9b9f7690e6280390af70.scope: Deactivated successfully.
Sep 13 00:48:12.047461 env[1193]: time="2025-09-13T00:48:12.047391568Z" level=info msg="shim disconnected" id=2389404360d243e28508919224f6d21b84e1f842cfda9b9f7690e6280390af70
Sep 13 00:48:12.047945 env[1193]: time="2025-09-13T00:48:12.047910638Z" level=warning msg="cleaning up after shim disconnected" id=2389404360d243e28508919224f6d21b84e1f842cfda9b9f7690e6280390af70 namespace=k8s.io
Sep 13 00:48:12.048104 env[1193]: time="2025-09-13T00:48:12.048081231Z" level=info msg="cleaning up dead shim"
Sep 13 00:48:12.063208 env[1193]: time="2025-09-13T00:48:12.063150495Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:48:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2525 runtime=io.containerd.runc.v2\n"
Sep 13 00:48:12.886038 kubelet[1885]: E0913 00:48:12.885986 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:48:12.890317 kubelet[1885]: E0913 00:48:12.890280 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:48:12.902357 env[1193]: time="2025-09-13T00:48:12.902288246Z" level=info msg="CreateContainer within sandbox \"8d1c67c9d30ab6995e9f539b1a5cfecfb836ff8825e74999c1e4229b36a923d8\" for 
container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 13 00:48:12.925031 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3052041898.mount: Deactivated successfully. Sep 13 00:48:12.941616 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2694402060.mount: Deactivated successfully. Sep 13 00:48:12.946670 env[1193]: time="2025-09-13T00:48:12.946594504Z" level=info msg="CreateContainer within sandbox \"8d1c67c9d30ab6995e9f539b1a5cfecfb836ff8825e74999c1e4229b36a923d8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"86f27e774290c91070a9eb1500e057a29ef590b3f3916f662c4819a0100658ce\"" Sep 13 00:48:12.947481 env[1193]: time="2025-09-13T00:48:12.947444220Z" level=info msg="StartContainer for \"86f27e774290c91070a9eb1500e057a29ef590b3f3916f662c4819a0100658ce\"" Sep 13 00:48:12.968613 systemd[1]: Started cri-containerd-86f27e774290c91070a9eb1500e057a29ef590b3f3916f662c4819a0100658ce.scope. Sep 13 00:48:13.016318 env[1193]: time="2025-09-13T00:48:13.016255461Z" level=info msg="StartContainer for \"86f27e774290c91070a9eb1500e057a29ef590b3f3916f662c4819a0100658ce\" returns successfully" Sep 13 00:48:13.256138 kubelet[1885]: I0913 00:48:13.255265 1885 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 13 00:48:13.291515 systemd[1]: Created slice kubepods-burstable-pod36f64599_5291_4417_b8c6_161d8a039be6.slice. Sep 13 00:48:13.304676 systemd[1]: Created slice kubepods-burstable-pod854d7058_dc06_4e5c_b76b_e6c0ba7dcc77.slice. 
Sep 13 00:48:13.413582 kubelet[1885]: I0913 00:48:13.413538 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6n2nf\" (UniqueName: \"kubernetes.io/projected/854d7058-dc06-4e5c-b76b-e6c0ba7dcc77-kube-api-access-6n2nf\") pod \"coredns-668d6bf9bc-gcphj\" (UID: \"854d7058-dc06-4e5c-b76b-e6c0ba7dcc77\") " pod="kube-system/coredns-668d6bf9bc-gcphj" Sep 13 00:48:13.413759 kubelet[1885]: I0913 00:48:13.413611 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/854d7058-dc06-4e5c-b76b-e6c0ba7dcc77-config-volume\") pod \"coredns-668d6bf9bc-gcphj\" (UID: \"854d7058-dc06-4e5c-b76b-e6c0ba7dcc77\") " pod="kube-system/coredns-668d6bf9bc-gcphj" Sep 13 00:48:13.413759 kubelet[1885]: I0913 00:48:13.413634 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/36f64599-5291-4417-b8c6-161d8a039be6-config-volume\") pod \"coredns-668d6bf9bc-2ddhd\" (UID: \"36f64599-5291-4417-b8c6-161d8a039be6\") " pod="kube-system/coredns-668d6bf9bc-2ddhd" Sep 13 00:48:13.413759 kubelet[1885]: I0913 00:48:13.413666 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75vcn\" (UniqueName: \"kubernetes.io/projected/36f64599-5291-4417-b8c6-161d8a039be6-kube-api-access-75vcn\") pod \"coredns-668d6bf9bc-2ddhd\" (UID: \"36f64599-5291-4417-b8c6-161d8a039be6\") " pod="kube-system/coredns-668d6bf9bc-2ddhd" Sep 13 00:48:13.598543 kubelet[1885]: E0913 00:48:13.598430 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:48:13.599770 env[1193]: time="2025-09-13T00:48:13.599727021Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-2ddhd,Uid:36f64599-5291-4417-b8c6-161d8a039be6,Namespace:kube-system,Attempt:0,}" Sep 13 00:48:13.607784 kubelet[1885]: E0913 00:48:13.607749 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:48:13.609132 env[1193]: time="2025-09-13T00:48:13.609080619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gcphj,Uid:854d7058-dc06-4e5c-b76b-e6c0ba7dcc77,Namespace:kube-system,Attempt:0,}" Sep 13 00:48:13.892956 kubelet[1885]: E0913 00:48:13.892824 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:48:13.930951 kubelet[1885]: I0913 00:48:13.930789 1885 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6zjrw" podStartSLOduration=5.68237274 podStartE2EDuration="14.930757602s" podCreationTimestamp="2025-09-13 00:47:59 +0000 UTC" firstStartedPulling="2025-09-13 00:47:59.880299062 +0000 UTC m=+6.317693841" lastFinishedPulling="2025-09-13 00:48:09.128683908 +0000 UTC m=+15.566078703" observedRunningTime="2025-09-13 00:48:13.926966331 +0000 UTC m=+20.364361160" watchObservedRunningTime="2025-09-13 00:48:13.930757602 +0000 UTC m=+20.368152404" Sep 13 00:48:14.894268 kubelet[1885]: E0913 00:48:14.894205 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:48:15.593658 systemd-networkd[1003]: cilium_host: Link UP Sep 13 00:48:15.595012 systemd-networkd[1003]: cilium_net: Link UP Sep 13 00:48:15.596629 systemd-networkd[1003]: cilium_net: Gained carrier Sep 13 00:48:15.597255 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link 
becomes ready Sep 13 00:48:15.597773 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Sep 13 00:48:15.597491 systemd-networkd[1003]: cilium_host: Gained carrier Sep 13 00:48:15.779157 systemd-networkd[1003]: cilium_vxlan: Link UP Sep 13 00:48:15.779171 systemd-networkd[1003]: cilium_vxlan: Gained carrier Sep 13 00:48:15.896516 kubelet[1885]: E0913 00:48:15.896354 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:48:16.048145 systemd-networkd[1003]: cilium_host: Gained IPv6LL Sep 13 00:48:16.239894 kernel: NET: Registered PF_ALG protocol family Sep 13 00:48:16.488037 systemd-networkd[1003]: cilium_net: Gained IPv6LL Sep 13 00:48:17.028794 systemd-networkd[1003]: lxc_health: Link UP Sep 13 00:48:17.036531 systemd-networkd[1003]: lxc_health: Gained carrier Sep 13 00:48:17.036922 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 13 00:48:17.181604 systemd-networkd[1003]: lxc94b115e5532a: Link UP Sep 13 00:48:17.188886 kernel: eth0: renamed from tmpd1ad8 Sep 13 00:48:17.193522 systemd-networkd[1003]: lxc94b115e5532a: Gained carrier Sep 13 00:48:17.193882 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc94b115e5532a: link becomes ready Sep 13 00:48:17.256059 systemd-networkd[1003]: cilium_vxlan: Gained IPv6LL Sep 13 00:48:17.654532 systemd-networkd[1003]: lxc42619a64c91f: Link UP Sep 13 00:48:17.662887 kernel: eth0: renamed from tmp903e9 Sep 13 00:48:17.668089 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc42619a64c91f: link becomes ready Sep 13 00:48:17.667765 systemd-networkd[1003]: lxc42619a64c91f: Gained carrier Sep 13 00:48:17.740016 kubelet[1885]: E0913 00:48:17.739635 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:48:17.900705 
kubelet[1885]: E0913 00:48:17.900656 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:48:18.288454 systemd-networkd[1003]: lxc94b115e5532a: Gained IPv6LL Sep 13 00:48:18.856103 systemd-networkd[1003]: lxc_health: Gained IPv6LL Sep 13 00:48:18.902434 kubelet[1885]: E0913 00:48:18.902392 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:48:18.920141 systemd-networkd[1003]: lxc42619a64c91f: Gained IPv6LL Sep 13 00:48:21.683999 env[1193]: time="2025-09-13T00:48:21.683672981Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:48:21.683999 env[1193]: time="2025-09-13T00:48:21.683723566Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:48:21.683999 env[1193]: time="2025-09-13T00:48:21.683739051Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:48:21.683999 env[1193]: time="2025-09-13T00:48:21.683920643Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/903e98e0c067f42e05e2475b72750aaa710c08e6dee656138c91dc3e25fe1161 pid=3090 runtime=io.containerd.runc.v2 Sep 13 00:48:21.692191 env[1193]: time="2025-09-13T00:48:21.692084900Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:48:21.692339 env[1193]: time="2025-09-13T00:48:21.692211036Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:48:21.692339 env[1193]: time="2025-09-13T00:48:21.692239059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:48:21.699888 env[1193]: time="2025-09-13T00:48:21.694952167Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d1ad86a88e4ea7c9aabc6e870a1c122f6f2f14c3315d81c118268621963b7bd1 pid=3092 runtime=io.containerd.runc.v2 Sep 13 00:48:21.709647 systemd[1]: Started cri-containerd-903e98e0c067f42e05e2475b72750aaa710c08e6dee656138c91dc3e25fe1161.scope. Sep 13 00:48:21.746333 systemd[1]: Started cri-containerd-d1ad86a88e4ea7c9aabc6e870a1c122f6f2f14c3315d81c118268621963b7bd1.scope. Sep 13 00:48:21.810057 env[1193]: time="2025-09-13T00:48:21.810011093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gcphj,Uid:854d7058-dc06-4e5c-b76b-e6c0ba7dcc77,Namespace:kube-system,Attempt:0,} returns sandbox id \"903e98e0c067f42e05e2475b72750aaa710c08e6dee656138c91dc3e25fe1161\"" Sep 13 00:48:21.811547 kubelet[1885]: E0913 00:48:21.811035 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:48:21.815847 env[1193]: time="2025-09-13T00:48:21.814999748Z" level=info msg="CreateContainer within sandbox \"903e98e0c067f42e05e2475b72750aaa710c08e6dee656138c91dc3e25fe1161\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:48:21.843283 env[1193]: time="2025-09-13T00:48:21.843240567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2ddhd,Uid:36f64599-5291-4417-b8c6-161d8a039be6,Namespace:kube-system,Attempt:0,} returns sandbox id \"d1ad86a88e4ea7c9aabc6e870a1c122f6f2f14c3315d81c118268621963b7bd1\"" Sep 13 00:48:21.846316 kubelet[1885]: E0913 
00:48:21.846131 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:48:21.849814 env[1193]: time="2025-09-13T00:48:21.847944440Z" level=info msg="CreateContainer within sandbox \"903e98e0c067f42e05e2475b72750aaa710c08e6dee656138c91dc3e25fe1161\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f40eaf388013a116b5068c7a52b1683ed6082fdcb6d6c48fb5fedd41a106eee7\"" Sep 13 00:48:21.850740 env[1193]: time="2025-09-13T00:48:21.850702452Z" level=info msg="StartContainer for \"f40eaf388013a116b5068c7a52b1683ed6082fdcb6d6c48fb5fedd41a106eee7\"" Sep 13 00:48:21.851944 env[1193]: time="2025-09-13T00:48:21.851909845Z" level=info msg="CreateContainer within sandbox \"d1ad86a88e4ea7c9aabc6e870a1c122f6f2f14c3315d81c118268621963b7bd1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:48:21.870623 env[1193]: time="2025-09-13T00:48:21.870573998Z" level=info msg="CreateContainer within sandbox \"d1ad86a88e4ea7c9aabc6e870a1c122f6f2f14c3315d81c118268621963b7bd1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4c7480c3bdfa4929677838ead67ef061253057c57832d8033676b2ac2e4a6e99\"" Sep 13 00:48:21.871661 env[1193]: time="2025-09-13T00:48:21.871625804Z" level=info msg="StartContainer for \"4c7480c3bdfa4929677838ead67ef061253057c57832d8033676b2ac2e4a6e99\"" Sep 13 00:48:21.894290 systemd[1]: Started cri-containerd-f40eaf388013a116b5068c7a52b1683ed6082fdcb6d6c48fb5fedd41a106eee7.scope. Sep 13 00:48:21.929092 systemd[1]: Started cri-containerd-4c7480c3bdfa4929677838ead67ef061253057c57832d8033676b2ac2e4a6e99.scope. 
Sep 13 00:48:21.997267 env[1193]: time="2025-09-13T00:48:21.995778711Z" level=info msg="StartContainer for \"4c7480c3bdfa4929677838ead67ef061253057c57832d8033676b2ac2e4a6e99\" returns successfully" Sep 13 00:48:22.000723 env[1193]: time="2025-09-13T00:48:22.000663660Z" level=info msg="StartContainer for \"f40eaf388013a116b5068c7a52b1683ed6082fdcb6d6c48fb5fedd41a106eee7\" returns successfully" Sep 13 00:48:22.691912 systemd[1]: run-containerd-runc-k8s.io-d1ad86a88e4ea7c9aabc6e870a1c122f6f2f14c3315d81c118268621963b7bd1-runc.2lo3yX.mount: Deactivated successfully. Sep 13 00:48:22.917299 kubelet[1885]: E0913 00:48:22.917249 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:48:22.919995 kubelet[1885]: E0913 00:48:22.919956 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:48:22.932805 kubelet[1885]: I0913 00:48:22.932739 1885 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-gcphj" podStartSLOduration=23.932716398 podStartE2EDuration="23.932716398s" podCreationTimestamp="2025-09-13 00:47:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:48:22.931069845 +0000 UTC m=+29.368464644" watchObservedRunningTime="2025-09-13 00:48:22.932716398 +0000 UTC m=+29.370111197" Sep 13 00:48:22.948838 kubelet[1885]: I0913 00:48:22.948698 1885 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-2ddhd" podStartSLOduration=23.94859612 podStartE2EDuration="23.94859612s" podCreationTimestamp="2025-09-13 00:47:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:48:22.944519999 +0000 UTC m=+29.381914798" watchObservedRunningTime="2025-09-13 00:48:22.94859612 +0000 UTC m=+29.385990920" Sep 13 00:48:23.921976 kubelet[1885]: E0913 00:48:23.921934 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:48:23.922731 kubelet[1885]: E0913 00:48:23.922708 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:48:24.924422 kubelet[1885]: E0913 00:48:24.924315 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:48:24.924422 kubelet[1885]: E0913 00:48:24.924355 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:48:32.272954 systemd[1]: Started sshd@5-146.190.148.102:22-147.75.109.163:41776.service. Sep 13 00:48:32.330138 sshd[3243]: Accepted publickey for core from 147.75.109.163 port 41776 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:48:32.334304 sshd[3243]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:48:32.341268 systemd[1]: Started session-6.scope. Sep 13 00:48:32.341972 systemd-logind[1182]: New session 6 of user core. Sep 13 00:48:32.561295 sshd[3243]: pam_unix(sshd:session): session closed for user core Sep 13 00:48:32.564578 systemd[1]: sshd@5-146.190.148.102:22-147.75.109.163:41776.service: Deactivated successfully. Sep 13 00:48:32.565337 systemd[1]: session-6.scope: Deactivated successfully. 
Sep 13 00:48:32.565967 systemd-logind[1182]: Session 6 logged out. Waiting for processes to exit. Sep 13 00:48:32.566831 systemd-logind[1182]: Removed session 6. Sep 13 00:48:37.570355 systemd[1]: Started sshd@6-146.190.148.102:22-147.75.109.163:41782.service. Sep 13 00:48:37.617530 sshd[3256]: Accepted publickey for core from 147.75.109.163 port 41782 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:48:37.620061 sshd[3256]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:48:37.626817 systemd[1]: Started session-7.scope. Sep 13 00:48:37.627974 systemd-logind[1182]: New session 7 of user core. Sep 13 00:48:37.808612 sshd[3256]: pam_unix(sshd:session): session closed for user core Sep 13 00:48:37.812229 systemd[1]: sshd@6-146.190.148.102:22-147.75.109.163:41782.service: Deactivated successfully. Sep 13 00:48:37.813371 systemd[1]: session-7.scope: Deactivated successfully. Sep 13 00:48:37.814901 systemd-logind[1182]: Session 7 logged out. Waiting for processes to exit. Sep 13 00:48:37.815881 systemd-logind[1182]: Removed session 7. Sep 13 00:48:42.814140 systemd[1]: Started sshd@7-146.190.148.102:22-147.75.109.163:51224.service. Sep 13 00:48:42.867283 sshd[3268]: Accepted publickey for core from 147.75.109.163 port 51224 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:48:42.868438 sshd[3268]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:48:42.873825 systemd-logind[1182]: New session 8 of user core. Sep 13 00:48:42.874134 systemd[1]: Started session-8.scope. Sep 13 00:48:43.023611 sshd[3268]: pam_unix(sshd:session): session closed for user core Sep 13 00:48:43.027234 systemd-logind[1182]: Session 8 logged out. Waiting for processes to exit. Sep 13 00:48:43.027715 systemd[1]: sshd@7-146.190.148.102:22-147.75.109.163:51224.service: Deactivated successfully. Sep 13 00:48:43.029382 systemd[1]: session-8.scope: Deactivated successfully. 
Sep 13 00:48:43.030915 systemd-logind[1182]: Removed session 8. Sep 13 00:48:48.031238 systemd[1]: Started sshd@8-146.190.148.102:22-147.75.109.163:51234.service. Sep 13 00:48:48.078325 sshd[3283]: Accepted publickey for core from 147.75.109.163 port 51234 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:48:48.080315 sshd[3283]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:48:48.086667 systemd-logind[1182]: New session 9 of user core. Sep 13 00:48:48.086695 systemd[1]: Started session-9.scope. Sep 13 00:48:48.224501 sshd[3283]: pam_unix(sshd:session): session closed for user core Sep 13 00:48:48.228062 systemd-logind[1182]: Session 9 logged out. Waiting for processes to exit. Sep 13 00:48:48.228239 systemd[1]: sshd@8-146.190.148.102:22-147.75.109.163:51234.service: Deactivated successfully. Sep 13 00:48:48.229148 systemd[1]: session-9.scope: Deactivated successfully. Sep 13 00:48:48.230141 systemd-logind[1182]: Removed session 9. Sep 13 00:48:53.233019 systemd[1]: Started sshd@9-146.190.148.102:22-147.75.109.163:46146.service. Sep 13 00:48:53.301136 sshd[3297]: Accepted publickey for core from 147.75.109.163 port 46146 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:48:53.303804 sshd[3297]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:48:53.310406 systemd[1]: Started session-10.scope. Sep 13 00:48:53.311157 systemd-logind[1182]: New session 10 of user core. Sep 13 00:48:53.462222 sshd[3297]: pam_unix(sshd:session): session closed for user core Sep 13 00:48:53.469774 systemd[1]: Started sshd@10-146.190.148.102:22-147.75.109.163:46156.service. Sep 13 00:48:53.470805 systemd[1]: sshd@9-146.190.148.102:22-147.75.109.163:46146.service: Deactivated successfully. Sep 13 00:48:53.471967 systemd[1]: session-10.scope: Deactivated successfully. Sep 13 00:48:53.473679 systemd-logind[1182]: Session 10 logged out. Waiting for processes to exit. 
Sep 13 00:48:53.475719 systemd-logind[1182]: Removed session 10. Sep 13 00:48:53.525022 sshd[3309]: Accepted publickey for core from 147.75.109.163 port 46156 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:48:53.527796 sshd[3309]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:48:53.534172 systemd-logind[1182]: New session 11 of user core. Sep 13 00:48:53.535271 systemd[1]: Started session-11.scope. Sep 13 00:48:53.827437 sshd[3309]: pam_unix(sshd:session): session closed for user core Sep 13 00:48:53.831228 systemd[1]: Started sshd@11-146.190.148.102:22-147.75.109.163:46172.service. Sep 13 00:48:53.835390 systemd[1]: sshd@10-146.190.148.102:22-147.75.109.163:46156.service: Deactivated successfully. Sep 13 00:48:53.841262 systemd[1]: session-11.scope: Deactivated successfully. Sep 13 00:48:53.845000 systemd-logind[1182]: Session 11 logged out. Waiting for processes to exit. Sep 13 00:48:53.846681 systemd-logind[1182]: Removed session 11. Sep 13 00:48:53.918947 sshd[3321]: Accepted publickey for core from 147.75.109.163 port 46172 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:48:53.922027 sshd[3321]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:48:53.929023 systemd-logind[1182]: New session 12 of user core. Sep 13 00:48:53.929768 systemd[1]: Started session-12.scope. Sep 13 00:48:54.123262 sshd[3321]: pam_unix(sshd:session): session closed for user core Sep 13 00:48:54.127158 systemd-logind[1182]: Session 12 logged out. Waiting for processes to exit. Sep 13 00:48:54.127210 systemd[1]: sshd@11-146.190.148.102:22-147.75.109.163:46172.service: Deactivated successfully. Sep 13 00:48:54.128127 systemd[1]: session-12.scope: Deactivated successfully. Sep 13 00:48:54.129808 systemd-logind[1182]: Removed session 12. Sep 13 00:48:59.131693 systemd[1]: Started sshd@12-146.190.148.102:22-147.75.109.163:46184.service. 
Sep 13 00:48:59.180062 sshd[3334]: Accepted publickey for core from 147.75.109.163 port 46184 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:48:59.181425 sshd[3334]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:48:59.188183 systemd[1]: Started session-13.scope. Sep 13 00:48:59.188796 systemd-logind[1182]: New session 13 of user core. Sep 13 00:48:59.325074 sshd[3334]: pam_unix(sshd:session): session closed for user core Sep 13 00:48:59.328358 systemd[1]: sshd@12-146.190.148.102:22-147.75.109.163:46184.service: Deactivated successfully. Sep 13 00:48:59.329038 systemd-logind[1182]: Session 13 logged out. Waiting for processes to exit. Sep 13 00:48:59.329105 systemd[1]: session-13.scope: Deactivated successfully. Sep 13 00:48:59.332896 systemd-logind[1182]: Removed session 13. Sep 13 00:49:04.333336 systemd[1]: Started sshd@13-146.190.148.102:22-147.75.109.163:32862.service. Sep 13 00:49:04.379944 sshd[3349]: Accepted publickey for core from 147.75.109.163 port 32862 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:49:04.382169 sshd[3349]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:49:04.389618 systemd-logind[1182]: New session 14 of user core. Sep 13 00:49:04.390024 systemd[1]: Started session-14.scope. Sep 13 00:49:04.536149 sshd[3349]: pam_unix(sshd:session): session closed for user core Sep 13 00:49:04.540463 systemd-logind[1182]: Session 14 logged out. Waiting for processes to exit. Sep 13 00:49:04.540773 systemd[1]: sshd@13-146.190.148.102:22-147.75.109.163:32862.service: Deactivated successfully. Sep 13 00:49:04.541961 systemd[1]: session-14.scope: Deactivated successfully. Sep 13 00:49:04.543424 systemd-logind[1182]: Removed session 14. Sep 13 00:49:09.544806 systemd[1]: Started sshd@14-146.190.148.102:22-147.75.109.163:32878.service. 
Sep 13 00:49:09.600690 sshd[3361]: Accepted publickey for core from 147.75.109.163 port 32878 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:49:09.602930 sshd[3361]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:49:09.608286 systemd[1]: Started session-15.scope. Sep 13 00:49:09.609041 systemd-logind[1182]: New session 15 of user core. Sep 13 00:49:09.757291 sshd[3361]: pam_unix(sshd:session): session closed for user core Sep 13 00:49:09.765544 systemd[1]: Started sshd@15-146.190.148.102:22-147.75.109.163:32884.service. Sep 13 00:49:09.766395 systemd[1]: sshd@14-146.190.148.102:22-147.75.109.163:32878.service: Deactivated successfully. Sep 13 00:49:09.770194 systemd[1]: session-15.scope: Deactivated successfully. Sep 13 00:49:09.771671 systemd-logind[1182]: Session 15 logged out. Waiting for processes to exit. Sep 13 00:49:09.773310 systemd-logind[1182]: Removed session 15. Sep 13 00:49:09.819486 sshd[3372]: Accepted publickey for core from 147.75.109.163 port 32884 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:49:09.820911 sshd[3372]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:49:09.829014 systemd[1]: Started session-16.scope. Sep 13 00:49:09.829723 systemd-logind[1182]: New session 16 of user core. Sep 13 00:49:10.252805 sshd[3372]: pam_unix(sshd:session): session closed for user core Sep 13 00:49:10.259660 systemd[1]: Started sshd@16-146.190.148.102:22-147.75.109.163:50058.service. Sep 13 00:49:10.263982 systemd[1]: sshd@15-146.190.148.102:22-147.75.109.163:32884.service: Deactivated successfully. Sep 13 00:49:10.266362 systemd[1]: session-16.scope: Deactivated successfully. Sep 13 00:49:10.268629 systemd-logind[1182]: Session 16 logged out. Waiting for processes to exit. Sep 13 00:49:10.270958 systemd-logind[1182]: Removed session 16. 
Sep 13 00:49:10.317130 sshd[3382]: Accepted publickey for core from 147.75.109.163 port 50058 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:49:10.319769 sshd[3382]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:49:10.326514 systemd-logind[1182]: New session 17 of user core. Sep 13 00:49:10.327841 systemd[1]: Started session-17.scope. Sep 13 00:49:11.152690 sshd[3382]: pam_unix(sshd:session): session closed for user core Sep 13 00:49:11.161552 systemd[1]: Started sshd@17-146.190.148.102:22-147.75.109.163:50060.service. Sep 13 00:49:11.162575 systemd[1]: sshd@16-146.190.148.102:22-147.75.109.163:50058.service: Deactivated successfully. Sep 13 00:49:11.164590 systemd[1]: session-17.scope: Deactivated successfully. Sep 13 00:49:11.165003 systemd-logind[1182]: Session 17 logged out. Waiting for processes to exit. Sep 13 00:49:11.167576 systemd-logind[1182]: Removed session 17. Sep 13 00:49:11.269459 sshd[3397]: Accepted publickey for core from 147.75.109.163 port 50060 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:49:11.271733 sshd[3397]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:49:11.278202 systemd-logind[1182]: New session 18 of user core. Sep 13 00:49:11.278752 systemd[1]: Started session-18.scope. Sep 13 00:49:11.601960 sshd[3397]: pam_unix(sshd:session): session closed for user core Sep 13 00:49:11.607889 systemd[1]: sshd@17-146.190.148.102:22-147.75.109.163:50060.service: Deactivated successfully. Sep 13 00:49:11.609446 systemd[1]: session-18.scope: Deactivated successfully. Sep 13 00:49:11.610496 systemd-logind[1182]: Session 18 logged out. Waiting for processes to exit. Sep 13 00:49:11.612687 systemd[1]: Started sshd@18-146.190.148.102:22-147.75.109.163:50066.service. Sep 13 00:49:11.620253 systemd-logind[1182]: Removed session 18. 
Sep 13 00:49:11.665561 sshd[3410]: Accepted publickey for core from 147.75.109.163 port 50066 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:49:11.668352 sshd[3410]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:49:11.676091 systemd[1]: Started session-19.scope. Sep 13 00:49:11.676752 systemd-logind[1182]: New session 19 of user core. Sep 13 00:49:11.839165 sshd[3410]: pam_unix(sshd:session): session closed for user core Sep 13 00:49:11.842336 systemd-logind[1182]: Session 19 logged out. Waiting for processes to exit. Sep 13 00:49:11.842631 systemd[1]: sshd@18-146.190.148.102:22-147.75.109.163:50066.service: Deactivated successfully. Sep 13 00:49:11.843439 systemd[1]: session-19.scope: Deactivated successfully. Sep 13 00:49:11.844527 systemd-logind[1182]: Removed session 19. Sep 13 00:49:15.777571 kubelet[1885]: E0913 00:49:15.777525 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:49:16.846197 systemd[1]: Started sshd@19-146.190.148.102:22-147.75.109.163:50072.service. Sep 13 00:49:16.893537 sshd[3422]: Accepted publickey for core from 147.75.109.163 port 50072 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA Sep 13 00:49:16.896168 sshd[3422]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:49:16.902834 systemd[1]: Started session-20.scope. Sep 13 00:49:16.904173 systemd-logind[1182]: New session 20 of user core. Sep 13 00:49:17.045008 sshd[3422]: pam_unix(sshd:session): session closed for user core Sep 13 00:49:17.048053 systemd-logind[1182]: Session 20 logged out. Waiting for processes to exit. Sep 13 00:49:17.049545 systemd[1]: sshd@19-146.190.148.102:22-147.75.109.163:50072.service: Deactivated successfully. Sep 13 00:49:17.050360 systemd[1]: session-20.scope: Deactivated successfully. 
Sep 13 00:49:17.051617 systemd-logind[1182]: Removed session 20.
Sep 13 00:49:17.767354 kubelet[1885]: E0913 00:49:17.767314    1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:49:22.053113 systemd[1]: Started sshd@20-146.190.148.102:22-147.75.109.163:36158.service.
Sep 13 00:49:22.100432 sshd[3437]: Accepted publickey for core from 147.75.109.163 port 36158 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA
Sep 13 00:49:22.102206 sshd[3437]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:49:22.109334 systemd[1]: Started session-21.scope.
Sep 13 00:49:22.110089 systemd-logind[1182]: New session 21 of user core.
Sep 13 00:49:22.278432 sshd[3437]: pam_unix(sshd:session): session closed for user core
Sep 13 00:49:22.281435 systemd[1]: sshd@20-146.190.148.102:22-147.75.109.163:36158.service: Deactivated successfully.
Sep 13 00:49:22.282269 systemd[1]: session-21.scope: Deactivated successfully.
Sep 13 00:49:22.282991 systemd-logind[1182]: Session 21 logged out. Waiting for processes to exit.
Sep 13 00:49:22.283850 systemd-logind[1182]: Removed session 21.
Sep 13 00:49:24.767074 kubelet[1885]: E0913 00:49:24.767032    1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:49:27.285710 systemd[1]: Started sshd@21-146.190.148.102:22-147.75.109.163:36174.service.
Sep 13 00:49:27.335565 sshd[3449]: Accepted publickey for core from 147.75.109.163 port 36174 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA
Sep 13 00:49:27.337462 sshd[3449]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:49:27.343269 systemd[1]: Started session-22.scope.
Sep 13 00:49:27.343794 systemd-logind[1182]: New session 22 of user core.
Sep 13 00:49:27.474101 sshd[3449]: pam_unix(sshd:session): session closed for user core
Sep 13 00:49:27.477657 systemd[1]: sshd@21-146.190.148.102:22-147.75.109.163:36174.service: Deactivated successfully.
Sep 13 00:49:27.478428 systemd[1]: session-22.scope: Deactivated successfully.
Sep 13 00:49:27.479709 systemd-logind[1182]: Session 22 logged out. Waiting for processes to exit.
Sep 13 00:49:27.480684 systemd-logind[1182]: Removed session 22.
Sep 13 00:49:27.767517 kubelet[1885]: E0913 00:49:27.767448    1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:49:29.767534 kubelet[1885]: E0913 00:49:29.767483    1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:49:30.767476 kubelet[1885]: E0913 00:49:30.767419    1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:49:32.487975 systemd[1]: Started sshd@22-146.190.148.102:22-147.75.109.163:44156.service.
Sep 13 00:49:32.538780 sshd[3462]: Accepted publickey for core from 147.75.109.163 port 44156 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA
Sep 13 00:49:32.541006 sshd[3462]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:49:32.551300 systemd-logind[1182]: New session 23 of user core.
Sep 13 00:49:32.553239 systemd[1]: Started session-23.scope.
Sep 13 00:49:32.716271 sshd[3462]: pam_unix(sshd:session): session closed for user core
Sep 13 00:49:32.719492 systemd-logind[1182]: Session 23 logged out. Waiting for processes to exit.
Sep 13 00:49:32.719946 systemd[1]: sshd@22-146.190.148.102:22-147.75.109.163:44156.service: Deactivated successfully.
Sep 13 00:49:32.721057 systemd[1]: session-23.scope: Deactivated successfully.
Sep 13 00:49:32.722220 systemd-logind[1182]: Removed session 23.
Sep 13 00:49:37.723063 systemd[1]: Started sshd@23-146.190.148.102:22-147.75.109.163:44158.service.
Sep 13 00:49:37.770276 sshd[3473]: Accepted publickey for core from 147.75.109.163 port 44158 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA
Sep 13 00:49:37.772201 sshd[3473]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:49:37.779558 systemd-logind[1182]: New session 24 of user core.
Sep 13 00:49:37.779597 systemd[1]: Started session-24.scope.
Sep 13 00:49:37.922728 sshd[3473]: pam_unix(sshd:session): session closed for user core
Sep 13 00:49:37.925760 systemd-logind[1182]: Session 24 logged out. Waiting for processes to exit.
Sep 13 00:49:37.925937 systemd[1]: sshd@23-146.190.148.102:22-147.75.109.163:44158.service: Deactivated successfully.
Sep 13 00:49:37.926686 systemd[1]: session-24.scope: Deactivated successfully.
Sep 13 00:49:37.927525 systemd-logind[1182]: Removed session 24.
Sep 13 00:49:42.766791 kubelet[1885]: E0913 00:49:42.766747    1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:49:42.929592 systemd[1]: Started sshd@24-146.190.148.102:22-147.75.109.163:35766.service.
Sep 13 00:49:42.972533 sshd[3484]: Accepted publickey for core from 147.75.109.163 port 35766 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA
Sep 13 00:49:42.974324 sshd[3484]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:49:42.979626 systemd[1]: Started session-25.scope.
Sep 13 00:49:42.979987 systemd-logind[1182]: New session 25 of user core.
Sep 13 00:49:43.115162 sshd[3484]: pam_unix(sshd:session): session closed for user core
Sep 13 00:49:43.121864 systemd[1]: Started sshd@25-146.190.148.102:22-147.75.109.163:35772.service.
Sep 13 00:49:43.122804 systemd[1]: sshd@24-146.190.148.102:22-147.75.109.163:35766.service: Deactivated successfully.
Sep 13 00:49:43.124189 systemd[1]: session-25.scope: Deactivated successfully.
Sep 13 00:49:43.125122 systemd-logind[1182]: Session 25 logged out. Waiting for processes to exit.
Sep 13 00:49:43.126433 systemd-logind[1182]: Removed session 25.
Sep 13 00:49:43.167922 sshd[3494]: Accepted publickey for core from 147.75.109.163 port 35772 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA
Sep 13 00:49:43.170567 sshd[3494]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:49:43.177899 systemd-logind[1182]: New session 26 of user core.
Sep 13 00:49:43.179086 systemd[1]: Started session-26.scope.
Sep 13 00:49:44.660839 systemd[1]: run-containerd-runc-k8s.io-86f27e774290c91070a9eb1500e057a29ef590b3f3916f662c4819a0100658ce-runc.Egc6aq.mount: Deactivated successfully.
Sep 13 00:49:44.681556 env[1193]: time="2025-09-13T00:49:44.681515583Z" level=info msg="StopContainer for \"03c6cc52eb00337b4da4c762d64934e5cc9eca9a3e9124704a02b1fbde165eb9\" with timeout 30 (s)"
Sep 13 00:49:44.682362 env[1193]: time="2025-09-13T00:49:44.682328269Z" level=info msg="Stop container \"03c6cc52eb00337b4da4c762d64934e5cc9eca9a3e9124704a02b1fbde165eb9\" with signal terminated"
Sep 13 00:49:44.690956 env[1193]: time="2025-09-13T00:49:44.690562708Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 13 00:49:44.696096 systemd[1]: cri-containerd-03c6cc52eb00337b4da4c762d64934e5cc9eca9a3e9124704a02b1fbde165eb9.scope: Deactivated successfully.
Sep 13 00:49:44.700893 env[1193]: time="2025-09-13T00:49:44.700838727Z" level=info msg="StopContainer for \"86f27e774290c91070a9eb1500e057a29ef590b3f3916f662c4819a0100658ce\" with timeout 2 (s)"
Sep 13 00:49:44.701519 env[1193]: time="2025-09-13T00:49:44.701483691Z" level=info msg="Stop container \"86f27e774290c91070a9eb1500e057a29ef590b3f3916f662c4819a0100658ce\" with signal terminated"
Sep 13 00:49:44.711955 systemd-networkd[1003]: lxc_health: Link DOWN
Sep 13 00:49:44.711964 systemd-networkd[1003]: lxc_health: Lost carrier
Sep 13 00:49:44.736705 systemd[1]: cri-containerd-86f27e774290c91070a9eb1500e057a29ef590b3f3916f662c4819a0100658ce.scope: Deactivated successfully.
Sep 13 00:49:44.737091 systemd[1]: cri-containerd-86f27e774290c91070a9eb1500e057a29ef590b3f3916f662c4819a0100658ce.scope: Consumed 7.759s CPU time.
Sep 13 00:49:44.745333 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-03c6cc52eb00337b4da4c762d64934e5cc9eca9a3e9124704a02b1fbde165eb9-rootfs.mount: Deactivated successfully.
Sep 13 00:49:44.751711 env[1193]: time="2025-09-13T00:49:44.751662155Z" level=info msg="shim disconnected" id=03c6cc52eb00337b4da4c762d64934e5cc9eca9a3e9124704a02b1fbde165eb9
Sep 13 00:49:44.752084 env[1193]: time="2025-09-13T00:49:44.752052953Z" level=warning msg="cleaning up after shim disconnected" id=03c6cc52eb00337b4da4c762d64934e5cc9eca9a3e9124704a02b1fbde165eb9 namespace=k8s.io
Sep 13 00:49:44.752246 env[1193]: time="2025-09-13T00:49:44.752223365Z" level=info msg="cleaning up dead shim"
Sep 13 00:49:44.761568 env[1193]: time="2025-09-13T00:49:44.761523683Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:49:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3557 runtime=io.containerd.runc.v2\n"
Sep 13 00:49:44.764418 env[1193]: time="2025-09-13T00:49:44.764350910Z" level=info msg="StopContainer for \"03c6cc52eb00337b4da4c762d64934e5cc9eca9a3e9124704a02b1fbde165eb9\" returns successfully"
Sep 13 00:49:44.765776 env[1193]: time="2025-09-13T00:49:44.765693566Z" level=info msg="StopPodSandbox for \"73e5bfb36e236e7bcfab5487faf34d8b9a82b80875fead07dddb93427b7846d3\""
Sep 13 00:49:44.765776 env[1193]: time="2025-09-13T00:49:44.765762784Z" level=info msg="Container to stop \"03c6cc52eb00337b4da4c762d64934e5cc9eca9a3e9124704a02b1fbde165eb9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:49:44.767923 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-73e5bfb36e236e7bcfab5487faf34d8b9a82b80875fead07dddb93427b7846d3-shm.mount: Deactivated successfully.
Sep 13 00:49:44.774830 env[1193]: time="2025-09-13T00:49:44.774776304Z" level=info msg="shim disconnected" id=86f27e774290c91070a9eb1500e057a29ef590b3f3916f662c4819a0100658ce
Sep 13 00:49:44.774830 env[1193]: time="2025-09-13T00:49:44.774822616Z" level=warning msg="cleaning up after shim disconnected" id=86f27e774290c91070a9eb1500e057a29ef590b3f3916f662c4819a0100658ce namespace=k8s.io
Sep 13 00:49:44.774830 env[1193]: time="2025-09-13T00:49:44.774831657Z" level=info msg="cleaning up dead shim"
Sep 13 00:49:44.781241 systemd[1]: cri-containerd-73e5bfb36e236e7bcfab5487faf34d8b9a82b80875fead07dddb93427b7846d3.scope: Deactivated successfully.
Sep 13 00:49:44.792756 env[1193]: time="2025-09-13T00:49:44.792709414Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:49:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3582 runtime=io.containerd.runc.v2\n"
Sep 13 00:49:44.794467 env[1193]: time="2025-09-13T00:49:44.794416466Z" level=info msg="StopContainer for \"86f27e774290c91070a9eb1500e057a29ef590b3f3916f662c4819a0100658ce\" returns successfully"
Sep 13 00:49:44.795273 env[1193]: time="2025-09-13T00:49:44.795239178Z" level=info msg="StopPodSandbox for \"8d1c67c9d30ab6995e9f539b1a5cfecfb836ff8825e74999c1e4229b36a923d8\""
Sep 13 00:49:44.795380 env[1193]: time="2025-09-13T00:49:44.795317063Z" level=info msg="Container to stop \"92b47945b789476bf67b67bb0b1c0cda0510b9928a17e0bdf9fd584bf7ad19ba\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:49:44.795380 env[1193]: time="2025-09-13T00:49:44.795340181Z" level=info msg="Container to stop \"86f27e774290c91070a9eb1500e057a29ef590b3f3916f662c4819a0100658ce\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:49:44.795380 env[1193]: time="2025-09-13T00:49:44.795359169Z" level=info msg="Container to stop \"5dd7dd4f59e89d4ab0af8955656efa71a24a35bce8818607222e11e62ee7ab05\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:49:44.795380 env[1193]: time="2025-09-13T00:49:44.795376987Z" level=info msg="Container to stop \"f6deedeee669a39bfdd356725d50bbc323d51f58bfe6573622d1db9b2836a9d2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:49:44.795519 env[1193]: time="2025-09-13T00:49:44.795388722Z" level=info msg="Container to stop \"2389404360d243e28508919224f6d21b84e1f842cfda9b9f7690e6280390af70\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:49:44.801561 systemd[1]: cri-containerd-8d1c67c9d30ab6995e9f539b1a5cfecfb836ff8825e74999c1e4229b36a923d8.scope: Deactivated successfully.
Sep 13 00:49:44.820635 env[1193]: time="2025-09-13T00:49:44.820570331Z" level=info msg="shim disconnected" id=73e5bfb36e236e7bcfab5487faf34d8b9a82b80875fead07dddb93427b7846d3
Sep 13 00:49:44.820635 env[1193]: time="2025-09-13T00:49:44.820619817Z" level=warning msg="cleaning up after shim disconnected" id=73e5bfb36e236e7bcfab5487faf34d8b9a82b80875fead07dddb93427b7846d3 namespace=k8s.io
Sep 13 00:49:44.820635 env[1193]: time="2025-09-13T00:49:44.820629573Z" level=info msg="cleaning up dead shim"
Sep 13 00:49:44.837429 env[1193]: time="2025-09-13T00:49:44.837368809Z" level=info msg="shim disconnected" id=8d1c67c9d30ab6995e9f539b1a5cfecfb836ff8825e74999c1e4229b36a923d8
Sep 13 00:49:44.837974 env[1193]: time="2025-09-13T00:49:44.837940533Z" level=warning msg="cleaning up after shim disconnected" id=8d1c67c9d30ab6995e9f539b1a5cfecfb836ff8825e74999c1e4229b36a923d8 namespace=k8s.io
Sep 13 00:49:44.838168 env[1193]: time="2025-09-13T00:49:44.838146419Z" level=info msg="cleaning up dead shim"
Sep 13 00:49:44.838467 env[1193]: time="2025-09-13T00:49:44.838430599Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:49:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3621 runtime=io.containerd.runc.v2\n"
Sep 13 00:49:44.839047 env[1193]: time="2025-09-13T00:49:44.838831309Z" level=info msg="TearDown network for sandbox \"73e5bfb36e236e7bcfab5487faf34d8b9a82b80875fead07dddb93427b7846d3\" successfully"
Sep 13 00:49:44.839047 env[1193]: time="2025-09-13T00:49:44.838931673Z" level=info msg="StopPodSandbox for \"73e5bfb36e236e7bcfab5487faf34d8b9a82b80875fead07dddb93427b7846d3\" returns successfully"
Sep 13 00:49:44.876000 env[1193]: time="2025-09-13T00:49:44.875945715Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:49:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3641 runtime=io.containerd.runc.v2\n"
Sep 13 00:49:44.876299 env[1193]: time="2025-09-13T00:49:44.876270871Z" level=info msg="TearDown network for sandbox \"8d1c67c9d30ab6995e9f539b1a5cfecfb836ff8825e74999c1e4229b36a923d8\" successfully"
Sep 13 00:49:44.876359 env[1193]: time="2025-09-13T00:49:44.876298455Z" level=info msg="StopPodSandbox for \"8d1c67c9d30ab6995e9f539b1a5cfecfb836ff8825e74999c1e4229b36a923d8\" returns successfully"
Sep 13 00:49:44.985627 kubelet[1885]: I0913 00:49:44.985575    1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3ca9ead0-c9c5-4a4f-b09c-fd481be229f2-cilium-config-path\") pod \"3ca9ead0-c9c5-4a4f-b09c-fd481be229f2\" (UID: \"3ca9ead0-c9c5-4a4f-b09c-fd481be229f2\") "
Sep 13 00:49:44.986247 kubelet[1885]: I0913 00:49:44.986213    1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cdd75caa-aa8b-47ed-9685-e07fa3d84d90-cilium-config-path\") pod \"cdd75caa-aa8b-47ed-9685-e07fa3d84d90\" (UID: \"cdd75caa-aa8b-47ed-9685-e07fa3d84d90\") "
Sep 13 00:49:44.986363 kubelet[1885]: I0913 00:49:44.986348    1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3ca9ead0-c9c5-4a4f-b09c-fd481be229f2-hubble-tls\") pod \"3ca9ead0-c9c5-4a4f-b09c-fd481be229f2\" (UID: \"3ca9ead0-c9c5-4a4f-b09c-fd481be229f2\") "
Sep 13 00:49:44.986456 kubelet[1885]: I0913 00:49:44.986443    1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3ca9ead0-c9c5-4a4f-b09c-fd481be229f2-lib-modules\") pod \"3ca9ead0-c9c5-4a4f-b09c-fd481be229f2\" (UID: \"3ca9ead0-c9c5-4a4f-b09c-fd481be229f2\") "
Sep 13 00:49:44.986545 kubelet[1885]: I0913 00:49:44.986532    1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3ca9ead0-c9c5-4a4f-b09c-fd481be229f2-host-proc-sys-kernel\") pod \"3ca9ead0-c9c5-4a4f-b09c-fd481be229f2\" (UID: \"3ca9ead0-c9c5-4a4f-b09c-fd481be229f2\") "
Sep 13 00:49:44.986647 kubelet[1885]: I0913 00:49:44.986634    1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3ca9ead0-c9c5-4a4f-b09c-fd481be229f2-bpf-maps\") pod \"3ca9ead0-c9c5-4a4f-b09c-fd481be229f2\" (UID: \"3ca9ead0-c9c5-4a4f-b09c-fd481be229f2\") "
Sep 13 00:49:44.986737 kubelet[1885]: I0913 00:49:44.986726    1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3ca9ead0-c9c5-4a4f-b09c-fd481be229f2-cni-path\") pod \"3ca9ead0-c9c5-4a4f-b09c-fd481be229f2\" (UID: \"3ca9ead0-c9c5-4a4f-b09c-fd481be229f2\") "
Sep 13 00:49:44.986816 kubelet[1885]: I0913 00:49:44.986805    1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3ca9ead0-c9c5-4a4f-b09c-fd481be229f2-cilium-cgroup\") pod \"3ca9ead0-c9c5-4a4f-b09c-fd481be229f2\" (UID: \"3ca9ead0-c9c5-4a4f-b09c-fd481be229f2\") "
Sep 13 00:49:44.986956 kubelet[1885]: I0913 00:49:44.986942    1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3ca9ead0-c9c5-4a4f-b09c-fd481be229f2-hostproc\") pod \"3ca9ead0-c9c5-4a4f-b09c-fd481be229f2\" (UID: \"3ca9ead0-c9c5-4a4f-b09c-fd481be229f2\") "
Sep 13 00:49:44.987079 kubelet[1885]: I0913 00:49:44.987060    1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dmbd\" (UniqueName: \"kubernetes.io/projected/3ca9ead0-c9c5-4a4f-b09c-fd481be229f2-kube-api-access-9dmbd\") pod \"3ca9ead0-c9c5-4a4f-b09c-fd481be229f2\" (UID: \"3ca9ead0-c9c5-4a4f-b09c-fd481be229f2\") "
Sep 13 00:49:44.987217 kubelet[1885]: I0913 00:49:44.987202    1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3ca9ead0-c9c5-4a4f-b09c-fd481be229f2-etc-cni-netd\") pod \"3ca9ead0-c9c5-4a4f-b09c-fd481be229f2\" (UID: \"3ca9ead0-c9c5-4a4f-b09c-fd481be229f2\") "
Sep 13 00:49:44.987329 kubelet[1885]: I0913 00:49:44.987312    1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9kq28\" (UniqueName: \"kubernetes.io/projected/cdd75caa-aa8b-47ed-9685-e07fa3d84d90-kube-api-access-9kq28\") pod \"cdd75caa-aa8b-47ed-9685-e07fa3d84d90\" (UID: \"cdd75caa-aa8b-47ed-9685-e07fa3d84d90\") "
Sep 13 00:49:44.987411 kubelet[1885]: I0913 00:49:44.987398    1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3ca9ead0-c9c5-4a4f-b09c-fd481be229f2-clustermesh-secrets\") pod \"3ca9ead0-c9c5-4a4f-b09c-fd481be229f2\" (UID: \"3ca9ead0-c9c5-4a4f-b09c-fd481be229f2\") "
Sep 13 00:49:44.987515 kubelet[1885]: I0913 00:49:44.987501    1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3ca9ead0-c9c5-4a4f-b09c-fd481be229f2-xtables-lock\") pod \"3ca9ead0-c9c5-4a4f-b09c-fd481be229f2\" (UID: \"3ca9ead0-c9c5-4a4f-b09c-fd481be229f2\") "
Sep 13 00:49:44.987705 kubelet[1885]: I0913 00:49:44.987691    1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3ca9ead0-c9c5-4a4f-b09c-fd481be229f2-host-proc-sys-net\") pod \"3ca9ead0-c9c5-4a4f-b09c-fd481be229f2\" (UID: \"3ca9ead0-c9c5-4a4f-b09c-fd481be229f2\") "
Sep 13 00:49:44.987810 kubelet[1885]: I0913 00:49:44.987794    1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3ca9ead0-c9c5-4a4f-b09c-fd481be229f2-cilium-run\") pod \"3ca9ead0-c9c5-4a4f-b09c-fd481be229f2\" (UID: \"3ca9ead0-c9c5-4a4f-b09c-fd481be229f2\") "
Sep 13 00:49:44.990712 kubelet[1885]: I0913 00:49:44.989555    1885 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ca9ead0-c9c5-4a4f-b09c-fd481be229f2-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3ca9ead0-c9c5-4a4f-b09c-fd481be229f2" (UID: "3ca9ead0-c9c5-4a4f-b09c-fd481be229f2"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 00:49:44.990907 kubelet[1885]: I0913 00:49:44.989575    1885 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ca9ead0-c9c5-4a4f-b09c-fd481be229f2-cni-path" (OuterVolumeSpecName: "cni-path") pod "3ca9ead0-c9c5-4a4f-b09c-fd481be229f2" (UID: "3ca9ead0-c9c5-4a4f-b09c-fd481be229f2"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 00:49:44.991065 kubelet[1885]: I0913 00:49:44.991042    1885 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ca9ead0-c9c5-4a4f-b09c-fd481be229f2-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3ca9ead0-c9c5-4a4f-b09c-fd481be229f2" (UID: "3ca9ead0-c9c5-4a4f-b09c-fd481be229f2"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 00:49:44.991177 kubelet[1885]: I0913 00:49:44.991159    1885 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ca9ead0-c9c5-4a4f-b09c-fd481be229f2-hostproc" (OuterVolumeSpecName: "hostproc") pod "3ca9ead0-c9c5-4a4f-b09c-fd481be229f2" (UID: "3ca9ead0-c9c5-4a4f-b09c-fd481be229f2"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 00:49:44.995515 kubelet[1885]: I0913 00:49:44.995469    1885 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ca9ead0-c9c5-4a4f-b09c-fd481be229f2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3ca9ead0-c9c5-4a4f-b09c-fd481be229f2" (UID: "3ca9ead0-c9c5-4a4f-b09c-fd481be229f2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 13 00:49:44.998224 kubelet[1885]: I0913 00:49:44.997725    1885 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdd75caa-aa8b-47ed-9685-e07fa3d84d90-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "cdd75caa-aa8b-47ed-9685-e07fa3d84d90" (UID: "cdd75caa-aa8b-47ed-9685-e07fa3d84d90"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 13 00:49:44.999563 kubelet[1885]: I0913 00:49:44.999520    1885 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ca9ead0-c9c5-4a4f-b09c-fd481be229f2-kube-api-access-9dmbd" (OuterVolumeSpecName: "kube-api-access-9dmbd") pod "3ca9ead0-c9c5-4a4f-b09c-fd481be229f2" (UID: "3ca9ead0-c9c5-4a4f-b09c-fd481be229f2"). InnerVolumeSpecName "kube-api-access-9dmbd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 13 00:49:44.999727 kubelet[1885]: I0913 00:49:44.999709    1885 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ca9ead0-c9c5-4a4f-b09c-fd481be229f2-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3ca9ead0-c9c5-4a4f-b09c-fd481be229f2" (UID: "3ca9ead0-c9c5-4a4f-b09c-fd481be229f2"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 00:49:45.001421 kubelet[1885]: I0913 00:49:45.001385    1885 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ca9ead0-c9c5-4a4f-b09c-fd481be229f2-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3ca9ead0-c9c5-4a4f-b09c-fd481be229f2" (UID: "3ca9ead0-c9c5-4a4f-b09c-fd481be229f2"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 13 00:49:45.001520 kubelet[1885]: I0913 00:49:45.001433    1885 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ca9ead0-c9c5-4a4f-b09c-fd481be229f2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3ca9ead0-c9c5-4a4f-b09c-fd481be229f2" (UID: "3ca9ead0-c9c5-4a4f-b09c-fd481be229f2"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 00:49:45.001520 kubelet[1885]: I0913 00:49:45.001452    1885 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ca9ead0-c9c5-4a4f-b09c-fd481be229f2-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3ca9ead0-c9c5-4a4f-b09c-fd481be229f2" (UID: "3ca9ead0-c9c5-4a4f-b09c-fd481be229f2"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 00:49:45.001520 kubelet[1885]: I0913 00:49:45.001474    1885 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ca9ead0-c9c5-4a4f-b09c-fd481be229f2-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3ca9ead0-c9c5-4a4f-b09c-fd481be229f2" (UID: "3ca9ead0-c9c5-4a4f-b09c-fd481be229f2"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 00:49:45.001520 kubelet[1885]: I0913 00:49:45.001499    1885 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ca9ead0-c9c5-4a4f-b09c-fd481be229f2-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3ca9ead0-c9c5-4a4f-b09c-fd481be229f2" (UID: "3ca9ead0-c9c5-4a4f-b09c-fd481be229f2"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 00:49:45.003013 kubelet[1885]: I0913 00:49:45.002984    1885 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdd75caa-aa8b-47ed-9685-e07fa3d84d90-kube-api-access-9kq28" (OuterVolumeSpecName: "kube-api-access-9kq28") pod "cdd75caa-aa8b-47ed-9685-e07fa3d84d90" (UID: "cdd75caa-aa8b-47ed-9685-e07fa3d84d90"). InnerVolumeSpecName "kube-api-access-9kq28". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 13 00:49:45.003156 kubelet[1885]: I0913 00:49:45.003137    1885 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ca9ead0-c9c5-4a4f-b09c-fd481be229f2-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3ca9ead0-c9c5-4a4f-b09c-fd481be229f2" (UID: "3ca9ead0-c9c5-4a4f-b09c-fd481be229f2"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 00:49:45.004161 kubelet[1885]: I0913 00:49:45.004128    1885 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ca9ead0-c9c5-4a4f-b09c-fd481be229f2-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3ca9ead0-c9c5-4a4f-b09c-fd481be229f2" (UID: "3ca9ead0-c9c5-4a4f-b09c-fd481be229f2"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Sep 13 00:49:45.088740 kubelet[1885]: I0913 00:49:45.088660    1885 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3ca9ead0-c9c5-4a4f-b09c-fd481be229f2-etc-cni-netd\") on node \"ci-3510.3.8-n-17df7d76e4\" DevicePath \"\""
Sep 13 00:49:45.088740 kubelet[1885]: I0913 00:49:45.088724    1885 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9kq28\" (UniqueName: \"kubernetes.io/projected/cdd75caa-aa8b-47ed-9685-e07fa3d84d90-kube-api-access-9kq28\") on node \"ci-3510.3.8-n-17df7d76e4\" DevicePath \"\""
Sep 13 00:49:45.088740 kubelet[1885]: I0913 00:49:45.088736    1885 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3ca9ead0-c9c5-4a4f-b09c-fd481be229f2-clustermesh-secrets\") on node \"ci-3510.3.8-n-17df7d76e4\" DevicePath \"\""
Sep 13 00:49:45.088740 kubelet[1885]: I0913 00:49:45.088749    1885 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3ca9ead0-c9c5-4a4f-b09c-fd481be229f2-xtables-lock\") on node \"ci-3510.3.8-n-17df7d76e4\" DevicePath \"\""
Sep 13 00:49:45.088740 kubelet[1885]: I0913 00:49:45.088760    1885 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3ca9ead0-c9c5-4a4f-b09c-fd481be229f2-host-proc-sys-net\") on node \"ci-3510.3.8-n-17df7d76e4\" DevicePath \"\""
Sep 13 00:49:45.089143 kubelet[1885]: I0913 00:49:45.088770    1885 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3ca9ead0-c9c5-4a4f-b09c-fd481be229f2-cilium-run\") on node \"ci-3510.3.8-n-17df7d76e4\" DevicePath \"\""
Sep 13 00:49:45.089143 kubelet[1885]: I0913 00:49:45.088779    1885 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3ca9ead0-c9c5-4a4f-b09c-fd481be229f2-cilium-config-path\") on node \"ci-3510.3.8-n-17df7d76e4\" DevicePath \"\""
Sep 13 00:49:45.089143 kubelet[1885]: I0913 00:49:45.088788    1885 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cdd75caa-aa8b-47ed-9685-e07fa3d84d90-cilium-config-path\") on node \"ci-3510.3.8-n-17df7d76e4\" DevicePath \"\""
Sep 13 00:49:45.089143 kubelet[1885]: I0913 00:49:45.088796    1885 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3ca9ead0-c9c5-4a4f-b09c-fd481be229f2-cni-path\") on node \"ci-3510.3.8-n-17df7d76e4\" DevicePath \"\""
Sep 13 00:49:45.089143 kubelet[1885]: I0913 00:49:45.088806    1885 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3ca9ead0-c9c5-4a4f-b09c-fd481be229f2-hubble-tls\") on node \"ci-3510.3.8-n-17df7d76e4\" DevicePath \"\""
Sep 13 00:49:45.089143 kubelet[1885]: I0913 00:49:45.088821    1885 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3ca9ead0-c9c5-4a4f-b09c-fd481be229f2-lib-modules\") on node \"ci-3510.3.8-n-17df7d76e4\" DevicePath \"\""
Sep 13 00:49:45.089143 kubelet[1885]: I0913 00:49:45.088829    1885 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3ca9ead0-c9c5-4a4f-b09c-fd481be229f2-host-proc-sys-kernel\") on node \"ci-3510.3.8-n-17df7d76e4\" DevicePath \"\""
Sep 13 00:49:45.089143 kubelet[1885]: I0913 00:49:45.088838    1885 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3ca9ead0-c9c5-4a4f-b09c-fd481be229f2-bpf-maps\") on node \"ci-3510.3.8-n-17df7d76e4\" DevicePath \"\""
Sep 13 00:49:45.089762 kubelet[1885]: I0913 00:49:45.088847    1885 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3ca9ead0-c9c5-4a4f-b09c-fd481be229f2-cilium-cgroup\") on node \"ci-3510.3.8-n-17df7d76e4\" DevicePath \"\""
Sep 13 00:49:45.089762 kubelet[1885]: I0913 00:49:45.088878    1885 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3ca9ead0-c9c5-4a4f-b09c-fd481be229f2-hostproc\") on node \"ci-3510.3.8-n-17df7d76e4\" DevicePath \"\""
Sep 13 00:49:45.089762 kubelet[1885]: I0913 00:49:45.088887    1885 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9dmbd\" (UniqueName: \"kubernetes.io/projected/3ca9ead0-c9c5-4a4f-b09c-fd481be229f2-kube-api-access-9dmbd\") on node \"ci-3510.3.8-n-17df7d76e4\" DevicePath \"\""
Sep 13 00:49:45.108346 kubelet[1885]: I0913 00:49:45.108303    1885 scope.go:117] "RemoveContainer" containerID="03c6cc52eb00337b4da4c762d64934e5cc9eca9a3e9124704a02b1fbde165eb9"
Sep 13 00:49:45.109770 systemd[1]: Removed slice kubepods-besteffort-podcdd75caa_aa8b_47ed_9685_e07fa3d84d90.slice.
Sep 13 00:49:45.112867 env[1193]: time="2025-09-13T00:49:45.112253728Z" level=info msg="RemoveContainer for \"03c6cc52eb00337b4da4c762d64934e5cc9eca9a3e9124704a02b1fbde165eb9\"" Sep 13 00:49:45.117513 env[1193]: time="2025-09-13T00:49:45.117191999Z" level=info msg="RemoveContainer for \"03c6cc52eb00337b4da4c762d64934e5cc9eca9a3e9124704a02b1fbde165eb9\" returns successfully" Sep 13 00:49:45.118128 kubelet[1885]: I0913 00:49:45.118086 1885 scope.go:117] "RemoveContainer" containerID="03c6cc52eb00337b4da4c762d64934e5cc9eca9a3e9124704a02b1fbde165eb9" Sep 13 00:49:45.119447 env[1193]: time="2025-09-13T00:49:45.119336757Z" level=error msg="ContainerStatus for \"03c6cc52eb00337b4da4c762d64934e5cc9eca9a3e9124704a02b1fbde165eb9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"03c6cc52eb00337b4da4c762d64934e5cc9eca9a3e9124704a02b1fbde165eb9\": not found" Sep 13 00:49:45.119707 kubelet[1885]: E0913 00:49:45.119673 1885 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"03c6cc52eb00337b4da4c762d64934e5cc9eca9a3e9124704a02b1fbde165eb9\": not found" containerID="03c6cc52eb00337b4da4c762d64934e5cc9eca9a3e9124704a02b1fbde165eb9" Sep 13 00:49:45.121961 kubelet[1885]: I0913 00:49:45.121835 1885 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"03c6cc52eb00337b4da4c762d64934e5cc9eca9a3e9124704a02b1fbde165eb9"} err="failed to get container status \"03c6cc52eb00337b4da4c762d64934e5cc9eca9a3e9124704a02b1fbde165eb9\": rpc error: code = NotFound desc = an error occurred when try to find container \"03c6cc52eb00337b4da4c762d64934e5cc9eca9a3e9124704a02b1fbde165eb9\": not found" Sep 13 00:49:45.123896 kubelet[1885]: I0913 00:49:45.123874 1885 scope.go:117] "RemoveContainer" containerID="86f27e774290c91070a9eb1500e057a29ef590b3f3916f662c4819a0100658ce" Sep 13 00:49:45.129826 systemd[1]: Removed slice 
kubepods-burstable-pod3ca9ead0_c9c5_4a4f_b09c_fd481be229f2.slice. Sep 13 00:49:45.129927 systemd[1]: kubepods-burstable-pod3ca9ead0_c9c5_4a4f_b09c_fd481be229f2.slice: Consumed 7.886s CPU time. Sep 13 00:49:45.141197 env[1193]: time="2025-09-13T00:49:45.141156065Z" level=info msg="RemoveContainer for \"86f27e774290c91070a9eb1500e057a29ef590b3f3916f662c4819a0100658ce\"" Sep 13 00:49:45.144055 env[1193]: time="2025-09-13T00:49:45.143990663Z" level=info msg="RemoveContainer for \"86f27e774290c91070a9eb1500e057a29ef590b3f3916f662c4819a0100658ce\" returns successfully" Sep 13 00:49:45.144647 kubelet[1885]: I0913 00:49:45.144619 1885 scope.go:117] "RemoveContainer" containerID="2389404360d243e28508919224f6d21b84e1f842cfda9b9f7690e6280390af70" Sep 13 00:49:45.145898 env[1193]: time="2025-09-13T00:49:45.145834922Z" level=info msg="RemoveContainer for \"2389404360d243e28508919224f6d21b84e1f842cfda9b9f7690e6280390af70\"" Sep 13 00:49:45.154648 env[1193]: time="2025-09-13T00:49:45.154583331Z" level=info msg="RemoveContainer for \"2389404360d243e28508919224f6d21b84e1f842cfda9b9f7690e6280390af70\" returns successfully" Sep 13 00:49:45.155199 kubelet[1885]: I0913 00:49:45.155165 1885 scope.go:117] "RemoveContainer" containerID="f6deedeee669a39bfdd356725d50bbc323d51f58bfe6573622d1db9b2836a9d2" Sep 13 00:49:45.156767 env[1193]: time="2025-09-13T00:49:45.156732787Z" level=info msg="RemoveContainer for \"f6deedeee669a39bfdd356725d50bbc323d51f58bfe6573622d1db9b2836a9d2\"" Sep 13 00:49:45.159519 env[1193]: time="2025-09-13T00:49:45.159483092Z" level=info msg="RemoveContainer for \"f6deedeee669a39bfdd356725d50bbc323d51f58bfe6573622d1db9b2836a9d2\" returns successfully" Sep 13 00:49:45.159930 kubelet[1885]: I0913 00:49:45.159907 1885 scope.go:117] "RemoveContainer" containerID="92b47945b789476bf67b67bb0b1c0cda0510b9928a17e0bdf9fd584bf7ad19ba" Sep 13 00:49:45.161564 env[1193]: time="2025-09-13T00:49:45.161529086Z" level=info msg="RemoveContainer for 
\"92b47945b789476bf67b67bb0b1c0cda0510b9928a17e0bdf9fd584bf7ad19ba\"" Sep 13 00:49:45.164242 env[1193]: time="2025-09-13T00:49:45.164197315Z" level=info msg="RemoveContainer for \"92b47945b789476bf67b67bb0b1c0cda0510b9928a17e0bdf9fd584bf7ad19ba\" returns successfully" Sep 13 00:49:45.164931 kubelet[1885]: I0913 00:49:45.164832 1885 scope.go:117] "RemoveContainer" containerID="5dd7dd4f59e89d4ab0af8955656efa71a24a35bce8818607222e11e62ee7ab05" Sep 13 00:49:45.166731 env[1193]: time="2025-09-13T00:49:45.166700617Z" level=info msg="RemoveContainer for \"5dd7dd4f59e89d4ab0af8955656efa71a24a35bce8818607222e11e62ee7ab05\"" Sep 13 00:49:45.170795 env[1193]: time="2025-09-13T00:49:45.170699277Z" level=info msg="RemoveContainer for \"5dd7dd4f59e89d4ab0af8955656efa71a24a35bce8818607222e11e62ee7ab05\" returns successfully" Sep 13 00:49:45.171381 kubelet[1885]: I0913 00:49:45.171260 1885 scope.go:117] "RemoveContainer" containerID="86f27e774290c91070a9eb1500e057a29ef590b3f3916f662c4819a0100658ce" Sep 13 00:49:45.171647 env[1193]: time="2025-09-13T00:49:45.171531684Z" level=error msg="ContainerStatus for \"86f27e774290c91070a9eb1500e057a29ef590b3f3916f662c4819a0100658ce\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"86f27e774290c91070a9eb1500e057a29ef590b3f3916f662c4819a0100658ce\": not found" Sep 13 00:49:45.172183 kubelet[1885]: E0913 00:49:45.171905 1885 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"86f27e774290c91070a9eb1500e057a29ef590b3f3916f662c4819a0100658ce\": not found" containerID="86f27e774290c91070a9eb1500e057a29ef590b3f3916f662c4819a0100658ce" Sep 13 00:49:45.172183 kubelet[1885]: I0913 00:49:45.171942 1885 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"86f27e774290c91070a9eb1500e057a29ef590b3f3916f662c4819a0100658ce"} err="failed to get container status 
\"86f27e774290c91070a9eb1500e057a29ef590b3f3916f662c4819a0100658ce\": rpc error: code = NotFound desc = an error occurred when try to find container \"86f27e774290c91070a9eb1500e057a29ef590b3f3916f662c4819a0100658ce\": not found" Sep 13 00:49:45.172183 kubelet[1885]: I0913 00:49:45.171996 1885 scope.go:117] "RemoveContainer" containerID="2389404360d243e28508919224f6d21b84e1f842cfda9b9f7690e6280390af70" Sep 13 00:49:45.172519 env[1193]: time="2025-09-13T00:49:45.172295424Z" level=error msg="ContainerStatus for \"2389404360d243e28508919224f6d21b84e1f842cfda9b9f7690e6280390af70\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2389404360d243e28508919224f6d21b84e1f842cfda9b9f7690e6280390af70\": not found" Sep 13 00:49:45.172930 kubelet[1885]: E0913 00:49:45.172726 1885 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2389404360d243e28508919224f6d21b84e1f842cfda9b9f7690e6280390af70\": not found" containerID="2389404360d243e28508919224f6d21b84e1f842cfda9b9f7690e6280390af70" Sep 13 00:49:45.172930 kubelet[1885]: I0913 00:49:45.172785 1885 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2389404360d243e28508919224f6d21b84e1f842cfda9b9f7690e6280390af70"} err="failed to get container status \"2389404360d243e28508919224f6d21b84e1f842cfda9b9f7690e6280390af70\": rpc error: code = NotFound desc = an error occurred when try to find container \"2389404360d243e28508919224f6d21b84e1f842cfda9b9f7690e6280390af70\": not found" Sep 13 00:49:45.172930 kubelet[1885]: I0913 00:49:45.172810 1885 scope.go:117] "RemoveContainer" containerID="f6deedeee669a39bfdd356725d50bbc323d51f58bfe6573622d1db9b2836a9d2" Sep 13 00:49:45.173190 env[1193]: time="2025-09-13T00:49:45.173066044Z" level=error msg="ContainerStatus for \"f6deedeee669a39bfdd356725d50bbc323d51f58bfe6573622d1db9b2836a9d2\" failed" error="rpc 
error: code = NotFound desc = an error occurred when try to find container \"f6deedeee669a39bfdd356725d50bbc323d51f58bfe6573622d1db9b2836a9d2\": not found" Sep 13 00:49:45.173490 kubelet[1885]: E0913 00:49:45.173367 1885 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f6deedeee669a39bfdd356725d50bbc323d51f58bfe6573622d1db9b2836a9d2\": not found" containerID="f6deedeee669a39bfdd356725d50bbc323d51f58bfe6573622d1db9b2836a9d2" Sep 13 00:49:45.173490 kubelet[1885]: I0913 00:49:45.173395 1885 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f6deedeee669a39bfdd356725d50bbc323d51f58bfe6573622d1db9b2836a9d2"} err="failed to get container status \"f6deedeee669a39bfdd356725d50bbc323d51f58bfe6573622d1db9b2836a9d2\": rpc error: code = NotFound desc = an error occurred when try to find container \"f6deedeee669a39bfdd356725d50bbc323d51f58bfe6573622d1db9b2836a9d2\": not found" Sep 13 00:49:45.173490 kubelet[1885]: I0913 00:49:45.173412 1885 scope.go:117] "RemoveContainer" containerID="92b47945b789476bf67b67bb0b1c0cda0510b9928a17e0bdf9fd584bf7ad19ba" Sep 13 00:49:45.174052 env[1193]: time="2025-09-13T00:49:45.174007355Z" level=error msg="ContainerStatus for \"92b47945b789476bf67b67bb0b1c0cda0510b9928a17e0bdf9fd584bf7ad19ba\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"92b47945b789476bf67b67bb0b1c0cda0510b9928a17e0bdf9fd584bf7ad19ba\": not found" Sep 13 00:49:45.174344 kubelet[1885]: E0913 00:49:45.174239 1885 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"92b47945b789476bf67b67bb0b1c0cda0510b9928a17e0bdf9fd584bf7ad19ba\": not found" containerID="92b47945b789476bf67b67bb0b1c0cda0510b9928a17e0bdf9fd584bf7ad19ba" Sep 13 00:49:45.174344 kubelet[1885]: I0913 00:49:45.174270 1885 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"92b47945b789476bf67b67bb0b1c0cda0510b9928a17e0bdf9fd584bf7ad19ba"} err="failed to get container status \"92b47945b789476bf67b67bb0b1c0cda0510b9928a17e0bdf9fd584bf7ad19ba\": rpc error: code = NotFound desc = an error occurred when try to find container \"92b47945b789476bf67b67bb0b1c0cda0510b9928a17e0bdf9fd584bf7ad19ba\": not found" Sep 13 00:49:45.174344 kubelet[1885]: I0913 00:49:45.174288 1885 scope.go:117] "RemoveContainer" containerID="5dd7dd4f59e89d4ab0af8955656efa71a24a35bce8818607222e11e62ee7ab05" Sep 13 00:49:45.174685 env[1193]: time="2025-09-13T00:49:45.174630235Z" level=error msg="ContainerStatus for \"5dd7dd4f59e89d4ab0af8955656efa71a24a35bce8818607222e11e62ee7ab05\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5dd7dd4f59e89d4ab0af8955656efa71a24a35bce8818607222e11e62ee7ab05\": not found" Sep 13 00:49:45.174846 kubelet[1885]: E0913 00:49:45.174811 1885 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5dd7dd4f59e89d4ab0af8955656efa71a24a35bce8818607222e11e62ee7ab05\": not found" containerID="5dd7dd4f59e89d4ab0af8955656efa71a24a35bce8818607222e11e62ee7ab05" Sep 13 00:49:45.175016 kubelet[1885]: I0913 00:49:45.174833 1885 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5dd7dd4f59e89d4ab0af8955656efa71a24a35bce8818607222e11e62ee7ab05"} err="failed to get container status \"5dd7dd4f59e89d4ab0af8955656efa71a24a35bce8818607222e11e62ee7ab05\": rpc error: code = NotFound desc = an error occurred when try to find container \"5dd7dd4f59e89d4ab0af8955656efa71a24a35bce8818607222e11e62ee7ab05\": not found" Sep 13 00:49:45.649956 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-86f27e774290c91070a9eb1500e057a29ef590b3f3916f662c4819a0100658ce-rootfs.mount: 
Deactivated successfully. Sep 13 00:49:45.650082 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-73e5bfb36e236e7bcfab5487faf34d8b9a82b80875fead07dddb93427b7846d3-rootfs.mount: Deactivated successfully. Sep 13 00:49:45.650145 systemd[1]: var-lib-kubelet-pods-cdd75caa\x2daa8b\x2d47ed\x2d9685\x2de07fa3d84d90-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9kq28.mount: Deactivated successfully. Sep 13 00:49:45.650206 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d1c67c9d30ab6995e9f539b1a5cfecfb836ff8825e74999c1e4229b36a923d8-rootfs.mount: Deactivated successfully. Sep 13 00:49:45.650264 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8d1c67c9d30ab6995e9f539b1a5cfecfb836ff8825e74999c1e4229b36a923d8-shm.mount: Deactivated successfully. Sep 13 00:49:45.650322 systemd[1]: var-lib-kubelet-pods-3ca9ead0\x2dc9c5\x2d4a4f\x2db09c\x2dfd481be229f2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9dmbd.mount: Deactivated successfully. Sep 13 00:49:45.650380 systemd[1]: var-lib-kubelet-pods-3ca9ead0\x2dc9c5\x2d4a4f\x2db09c\x2dfd481be229f2-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 13 00:49:45.650438 systemd[1]: var-lib-kubelet-pods-3ca9ead0\x2dc9c5\x2d4a4f\x2db09c\x2dfd481be229f2-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Sep 13 00:49:45.769370 kubelet[1885]: I0913 00:49:45.769324 1885 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ca9ead0-c9c5-4a4f-b09c-fd481be229f2" path="/var/lib/kubelet/pods/3ca9ead0-c9c5-4a4f-b09c-fd481be229f2/volumes"
Sep 13 00:49:45.770510 kubelet[1885]: I0913 00:49:45.770478 1885 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdd75caa-aa8b-47ed-9685-e07fa3d84d90" path="/var/lib/kubelet/pods/cdd75caa-aa8b-47ed-9685-e07fa3d84d90/volumes"
Sep 13 00:49:46.585701 sshd[3494]: pam_unix(sshd:session): session closed for user core
Sep 13 00:49:46.591418 systemd[1]: sshd@25-146.190.148.102:22-147.75.109.163:35772.service: Deactivated successfully.
Sep 13 00:49:46.593011 systemd[1]: session-26.scope: Deactivated successfully.
Sep 13 00:49:46.594268 systemd-logind[1182]: Session 26 logged out. Waiting for processes to exit.
Sep 13 00:49:46.597185 systemd[1]: Started sshd@26-146.190.148.102:22-147.75.109.163:35776.service.
Sep 13 00:49:46.599599 systemd-logind[1182]: Removed session 26.
Sep 13 00:49:46.654066 sshd[3660]: Accepted publickey for core from 147.75.109.163 port 35776 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA
Sep 13 00:49:46.655785 sshd[3660]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:49:46.661055 systemd-logind[1182]: New session 27 of user core.
Sep 13 00:49:46.661730 systemd[1]: Started session-27.scope.
Sep 13 00:49:47.196809 sshd[3660]: pam_unix(sshd:session): session closed for user core
Sep 13 00:49:47.202967 systemd[1]: sshd@26-146.190.148.102:22-147.75.109.163:35776.service: Deactivated successfully.
Sep 13 00:49:47.204066 systemd[1]: session-27.scope: Deactivated successfully.
Sep 13 00:49:47.204781 systemd-logind[1182]: Session 27 logged out. Waiting for processes to exit.
Sep 13 00:49:47.211202 systemd[1]: Started sshd@27-146.190.148.102:22-147.75.109.163:35784.service.
Sep 13 00:49:47.214171 systemd-logind[1182]: Removed session 27.
Sep 13 00:49:47.251583 kubelet[1885]: I0913 00:49:47.251546 1885 memory_manager.go:355] "RemoveStaleState removing state" podUID="3ca9ead0-c9c5-4a4f-b09c-fd481be229f2" containerName="cilium-agent"
Sep 13 00:49:47.252043 kubelet[1885]: I0913 00:49:47.252024 1885 memory_manager.go:355] "RemoveStaleState removing state" podUID="cdd75caa-aa8b-47ed-9685-e07fa3d84d90" containerName="cilium-operator"
Sep 13 00:49:47.267385 systemd[1]: Created slice kubepods-burstable-pod230e6783_badc_4663_9e6a_d3bc0b1c750a.slice.
Sep 13 00:49:47.270739 sshd[3672]: Accepted publickey for core from 147.75.109.163 port 35784 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA
Sep 13 00:49:47.273797 sshd[3672]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:49:47.278470 systemd-logind[1182]: New session 28 of user core.
Sep 13 00:49:47.279969 systemd[1]: Started session-28.scope.
Sep 13 00:49:47.404299 kubelet[1885]: I0913 00:49:47.404226 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/230e6783-badc-4663-9e6a-d3bc0b1c750a-clustermesh-secrets\") pod \"cilium-qx2mr\" (UID: \"230e6783-badc-4663-9e6a-d3bc0b1c750a\") " pod="kube-system/cilium-qx2mr"
Sep 13 00:49:47.404668 kubelet[1885]: I0913 00:49:47.404625 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/230e6783-badc-4663-9e6a-d3bc0b1c750a-cilium-config-path\") pod \"cilium-qx2mr\" (UID: \"230e6783-badc-4663-9e6a-d3bc0b1c750a\") " pod="kube-system/cilium-qx2mr"
Sep 13 00:49:47.404892 kubelet[1885]: I0913 00:49:47.404829 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/230e6783-badc-4663-9e6a-d3bc0b1c750a-cilium-run\") pod \"cilium-qx2mr\" (UID: \"230e6783-badc-4663-9e6a-d3bc0b1c750a\") " pod="kube-system/cilium-qx2mr"
Sep 13 00:49:47.405048 kubelet[1885]: I0913 00:49:47.405025 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/230e6783-badc-4663-9e6a-d3bc0b1c750a-bpf-maps\") pod \"cilium-qx2mr\" (UID: \"230e6783-badc-4663-9e6a-d3bc0b1c750a\") " pod="kube-system/cilium-qx2mr"
Sep 13 00:49:47.405181 kubelet[1885]: I0913 00:49:47.405161 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/230e6783-badc-4663-9e6a-d3bc0b1c750a-cilium-cgroup\") pod \"cilium-qx2mr\" (UID: \"230e6783-badc-4663-9e6a-d3bc0b1c750a\") " pod="kube-system/cilium-qx2mr"
Sep 13 00:49:47.405470 kubelet[1885]: I0913 00:49:47.405443 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnw4b\" (UniqueName: \"kubernetes.io/projected/230e6783-badc-4663-9e6a-d3bc0b1c750a-kube-api-access-xnw4b\") pod \"cilium-qx2mr\" (UID: \"230e6783-badc-4663-9e6a-d3bc0b1c750a\") " pod="kube-system/cilium-qx2mr"
Sep 13 00:49:47.405628 kubelet[1885]: I0913 00:49:47.405610 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/230e6783-badc-4663-9e6a-d3bc0b1c750a-cilium-ipsec-secrets\") pod \"cilium-qx2mr\" (UID: \"230e6783-badc-4663-9e6a-d3bc0b1c750a\") " pod="kube-system/cilium-qx2mr"
Sep 13 00:49:47.405761 kubelet[1885]: I0913 00:49:47.405737 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/230e6783-badc-4663-9e6a-d3bc0b1c750a-etc-cni-netd\") pod \"cilium-qx2mr\" (UID: \"230e6783-badc-4663-9e6a-d3bc0b1c750a\") " pod="kube-system/cilium-qx2mr"
Sep 13 00:49:47.405927 kubelet[1885]: I0913 00:49:47.405907 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/230e6783-badc-4663-9e6a-d3bc0b1c750a-hostproc\") pod \"cilium-qx2mr\" (UID: \"230e6783-badc-4663-9e6a-d3bc0b1c750a\") " pod="kube-system/cilium-qx2mr"
Sep 13 00:49:47.406069 kubelet[1885]: I0913 00:49:47.406047 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/230e6783-badc-4663-9e6a-d3bc0b1c750a-host-proc-sys-kernel\") pod \"cilium-qx2mr\" (UID: \"230e6783-badc-4663-9e6a-d3bc0b1c750a\") " pod="kube-system/cilium-qx2mr"
Sep 13 00:49:47.406231 kubelet[1885]: I0913 00:49:47.406210 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/230e6783-badc-4663-9e6a-d3bc0b1c750a-cni-path\") pod \"cilium-qx2mr\" (UID: \"230e6783-badc-4663-9e6a-d3bc0b1c750a\") " pod="kube-system/cilium-qx2mr"
Sep 13 00:49:47.406367 kubelet[1885]: I0913 00:49:47.406347 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/230e6783-badc-4663-9e6a-d3bc0b1c750a-xtables-lock\") pod \"cilium-qx2mr\" (UID: \"230e6783-badc-4663-9e6a-d3bc0b1c750a\") " pod="kube-system/cilium-qx2mr"
Sep 13 00:49:47.406521 kubelet[1885]: I0913 00:49:47.406502 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/230e6783-badc-4663-9e6a-d3bc0b1c750a-lib-modules\") pod \"cilium-qx2mr\" (UID: \"230e6783-badc-4663-9e6a-d3bc0b1c750a\") " pod="kube-system/cilium-qx2mr"
Sep 13 00:49:47.406753 kubelet[1885]: I0913 00:49:47.406729 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/230e6783-badc-4663-9e6a-d3bc0b1c750a-host-proc-sys-net\") pod \"cilium-qx2mr\" (UID: \"230e6783-badc-4663-9e6a-d3bc0b1c750a\") " pod="kube-system/cilium-qx2mr"
Sep 13 00:49:47.406953 kubelet[1885]: I0913 00:49:47.406931 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/230e6783-badc-4663-9e6a-d3bc0b1c750a-hubble-tls\") pod \"cilium-qx2mr\" (UID: \"230e6783-badc-4663-9e6a-d3bc0b1c750a\") " pod="kube-system/cilium-qx2mr"
Sep 13 00:49:47.487376 sshd[3672]: pam_unix(sshd:session): session closed for user core
Sep 13 00:49:47.495008 systemd[1]: Started sshd@28-146.190.148.102:22-147.75.109.163:35786.service.
Sep 13 00:49:47.495706 systemd[1]: sshd@27-146.190.148.102:22-147.75.109.163:35784.service: Deactivated successfully.
Sep 13 00:49:47.496666 systemd[1]: session-28.scope: Deactivated successfully.
Sep 13 00:49:47.498542 systemd-logind[1182]: Session 28 logged out. Waiting for processes to exit.
Sep 13 00:49:47.500354 systemd-logind[1182]: Removed session 28.
Sep 13 00:49:47.528632 kubelet[1885]: E0913 00:49:47.527227 1885 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[cilium-ipsec-secrets clustermesh-secrets etc-cni-netd hostproc hubble-tls kube-api-access-xnw4b], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-qx2mr" podUID="230e6783-badc-4663-9e6a-d3bc0b1c750a"
Sep 13 00:49:47.562424 sshd[3682]: Accepted publickey for core from 147.75.109.163 port 35786 ssh2: RSA SHA256:Z+gHFjZa6FbNcZ3OMDgtPyMdExX9gV+gkyGg/y2DokA
Sep 13 00:49:47.564283 sshd[3682]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:49:47.570109 systemd-logind[1182]: New session 29 of user core.
Sep 13 00:49:47.570798 systemd[1]: Started session-29.scope.
Sep 13 00:49:48.221107 kubelet[1885]: I0913 00:49:48.213037 1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnw4b\" (UniqueName: \"kubernetes.io/projected/230e6783-badc-4663-9e6a-d3bc0b1c750a-kube-api-access-xnw4b\") pod \"230e6783-badc-4663-9e6a-d3bc0b1c750a\" (UID: \"230e6783-badc-4663-9e6a-d3bc0b1c750a\") " Sep 13 00:49:48.221107 kubelet[1885]: I0913 00:49:48.213100 1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/230e6783-badc-4663-9e6a-d3bc0b1c750a-host-proc-sys-net\") pod \"230e6783-badc-4663-9e6a-d3bc0b1c750a\" (UID: \"230e6783-badc-4663-9e6a-d3bc0b1c750a\") " Sep 13 00:49:48.221107 kubelet[1885]: I0913 00:49:48.213124 1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/230e6783-badc-4663-9e6a-d3bc0b1c750a-cilium-run\") pod \"230e6783-badc-4663-9e6a-d3bc0b1c750a\" (UID: \"230e6783-badc-4663-9e6a-d3bc0b1c750a\") " Sep 13 00:49:48.221107 kubelet[1885]: I0913 00:49:48.213142 1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/230e6783-badc-4663-9e6a-d3bc0b1c750a-cilium-cgroup\") pod \"230e6783-badc-4663-9e6a-d3bc0b1c750a\" (UID: \"230e6783-badc-4663-9e6a-d3bc0b1c750a\") " Sep 13 00:49:48.221107 kubelet[1885]: I0913 00:49:48.213172 1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/230e6783-badc-4663-9e6a-d3bc0b1c750a-cni-path\") pod \"230e6783-badc-4663-9e6a-d3bc0b1c750a\" (UID: \"230e6783-badc-4663-9e6a-d3bc0b1c750a\") " Sep 13 00:49:48.221107 kubelet[1885]: I0913 00:49:48.213192 1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/230e6783-badc-4663-9e6a-d3bc0b1c750a-cilium-ipsec-secrets\") pod \"230e6783-badc-4663-9e6a-d3bc0b1c750a\" (UID: \"230e6783-badc-4663-9e6a-d3bc0b1c750a\") " Sep 13 00:49:48.221433 kubelet[1885]: I0913 00:49:48.213210 1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/230e6783-badc-4663-9e6a-d3bc0b1c750a-etc-cni-netd\") pod \"230e6783-badc-4663-9e6a-d3bc0b1c750a\" (UID: \"230e6783-badc-4663-9e6a-d3bc0b1c750a\") " Sep 13 00:49:48.221433 kubelet[1885]: I0913 00:49:48.213231 1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/230e6783-badc-4663-9e6a-d3bc0b1c750a-host-proc-sys-kernel\") pod \"230e6783-badc-4663-9e6a-d3bc0b1c750a\" (UID: \"230e6783-badc-4663-9e6a-d3bc0b1c750a\") " Sep 13 00:49:48.221433 kubelet[1885]: I0913 00:49:48.213272 1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/230e6783-badc-4663-9e6a-d3bc0b1c750a-cilium-config-path\") pod \"230e6783-badc-4663-9e6a-d3bc0b1c750a\" (UID: \"230e6783-badc-4663-9e6a-d3bc0b1c750a\") " Sep 13 00:49:48.221433 kubelet[1885]: I0913 00:49:48.213294 1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/230e6783-badc-4663-9e6a-d3bc0b1c750a-bpf-maps\") pod \"230e6783-badc-4663-9e6a-d3bc0b1c750a\" (UID: \"230e6783-badc-4663-9e6a-d3bc0b1c750a\") " Sep 13 00:49:48.221433 kubelet[1885]: I0913 00:49:48.213311 1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/230e6783-badc-4663-9e6a-d3bc0b1c750a-xtables-lock\") pod \"230e6783-badc-4663-9e6a-d3bc0b1c750a\" (UID: \"230e6783-badc-4663-9e6a-d3bc0b1c750a\") " Sep 13 00:49:48.221433 kubelet[1885]: I0913 00:49:48.213341 1885 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/230e6783-badc-4663-9e6a-d3bc0b1c750a-hubble-tls\") pod \"230e6783-badc-4663-9e6a-d3bc0b1c750a\" (UID: \"230e6783-badc-4663-9e6a-d3bc0b1c750a\") " Sep 13 00:49:48.220310 systemd[1]: var-lib-kubelet-pods-230e6783\x2dbadc\x2d4663\x2d9e6a\x2dd3bc0b1c750a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxnw4b.mount: Deactivated successfully. Sep 13 00:49:48.224955 kubelet[1885]: I0913 00:49:48.213355 1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/230e6783-badc-4663-9e6a-d3bc0b1c750a-lib-modules\") pod \"230e6783-badc-4663-9e6a-d3bc0b1c750a\" (UID: \"230e6783-badc-4663-9e6a-d3bc0b1c750a\") " Sep 13 00:49:48.224955 kubelet[1885]: I0913 00:49:48.213370 1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/230e6783-badc-4663-9e6a-d3bc0b1c750a-hostproc\") pod \"230e6783-badc-4663-9e6a-d3bc0b1c750a\" (UID: \"230e6783-badc-4663-9e6a-d3bc0b1c750a\") " Sep 13 00:49:48.224955 kubelet[1885]: I0913 00:49:48.213388 1885 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/230e6783-badc-4663-9e6a-d3bc0b1c750a-clustermesh-secrets\") pod \"230e6783-badc-4663-9e6a-d3bc0b1c750a\" (UID: \"230e6783-badc-4663-9e6a-d3bc0b1c750a\") " Sep 13 00:49:48.224955 kubelet[1885]: I0913 00:49:48.214052 1885 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/230e6783-badc-4663-9e6a-d3bc0b1c750a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "230e6783-badc-4663-9e6a-d3bc0b1c750a" (UID: "230e6783-badc-4663-9e6a-d3bc0b1c750a"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 00:49:48.224955 kubelet[1885]: I0913 00:49:48.214104 1885 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/230e6783-badc-4663-9e6a-d3bc0b1c750a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "230e6783-badc-4663-9e6a-d3bc0b1c750a" (UID: "230e6783-badc-4663-9e6a-d3bc0b1c750a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 00:49:48.224053 systemd[1]: var-lib-kubelet-pods-230e6783\x2dbadc\x2d4663\x2d9e6a\x2dd3bc0b1c750a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Sep 13 00:49:48.225344 kubelet[1885]: I0913 00:49:48.214123 1885 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/230e6783-badc-4663-9e6a-d3bc0b1c750a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "230e6783-badc-4663-9e6a-d3bc0b1c750a" (UID: "230e6783-badc-4663-9e6a-d3bc0b1c750a"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 00:49:48.225344 kubelet[1885]: I0913 00:49:48.214139 1885 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/230e6783-badc-4663-9e6a-d3bc0b1c750a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "230e6783-badc-4663-9e6a-d3bc0b1c750a" (UID: "230e6783-badc-4663-9e6a-d3bc0b1c750a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 00:49:48.225344 kubelet[1885]: I0913 00:49:48.214154 1885 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/230e6783-badc-4663-9e6a-d3bc0b1c750a-cni-path" (OuterVolumeSpecName: "cni-path") pod "230e6783-badc-4663-9e6a-d3bc0b1c750a" (UID: "230e6783-badc-4663-9e6a-d3bc0b1c750a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 00:49:48.225344 kubelet[1885]: I0913 00:49:48.216194 1885 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/230e6783-badc-4663-9e6a-d3bc0b1c750a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "230e6783-badc-4663-9e6a-d3bc0b1c750a" (UID: "230e6783-badc-4663-9e6a-d3bc0b1c750a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 13 00:49:48.225344 kubelet[1885]: I0913 00:49:48.216247 1885 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/230e6783-badc-4663-9e6a-d3bc0b1c750a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "230e6783-badc-4663-9e6a-d3bc0b1c750a" (UID: "230e6783-badc-4663-9e6a-d3bc0b1c750a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 00:49:48.225538 kubelet[1885]: I0913 00:49:48.216267 1885 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/230e6783-badc-4663-9e6a-d3bc0b1c750a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "230e6783-badc-4663-9e6a-d3bc0b1c750a" (UID: "230e6783-badc-4663-9e6a-d3bc0b1c750a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 00:49:48.225538 kubelet[1885]: I0913 00:49:48.218659 1885 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/230e6783-badc-4663-9e6a-d3bc0b1c750a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "230e6783-badc-4663-9e6a-d3bc0b1c750a" (UID: "230e6783-badc-4663-9e6a-d3bc0b1c750a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 00:49:48.225538 kubelet[1885]: I0913 00:49:48.218716 1885 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/230e6783-badc-4663-9e6a-d3bc0b1c750a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "230e6783-badc-4663-9e6a-d3bc0b1c750a" (UID: "230e6783-badc-4663-9e6a-d3bc0b1c750a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 00:49:48.225538 kubelet[1885]: I0913 00:49:48.218738 1885 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/230e6783-badc-4663-9e6a-d3bc0b1c750a-hostproc" (OuterVolumeSpecName: "hostproc") pod "230e6783-badc-4663-9e6a-d3bc0b1c750a" (UID: "230e6783-badc-4663-9e6a-d3bc0b1c750a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 13 00:49:48.225538 kubelet[1885]: I0913 00:49:48.222994 1885 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/230e6783-badc-4663-9e6a-d3bc0b1c750a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "230e6783-badc-4663-9e6a-d3bc0b1c750a" (UID: "230e6783-badc-4663-9e6a-d3bc0b1c750a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Sep 13 00:49:48.226310 kubelet[1885]: I0913 00:49:48.226267 1885 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/230e6783-badc-4663-9e6a-d3bc0b1c750a-kube-api-access-xnw4b" (OuterVolumeSpecName: "kube-api-access-xnw4b") pod "230e6783-badc-4663-9e6a-d3bc0b1c750a" (UID: "230e6783-badc-4663-9e6a-d3bc0b1c750a"). InnerVolumeSpecName "kube-api-access-xnw4b".
PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 13 00:49:48.226566 kubelet[1885]: I0913 00:49:48.226542 1885 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/230e6783-badc-4663-9e6a-d3bc0b1c750a-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "230e6783-badc-4663-9e6a-d3bc0b1c750a" (UID: "230e6783-badc-4663-9e6a-d3bc0b1c750a"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Sep 13 00:49:48.229127 kubelet[1885]: I0913 00:49:48.229083 1885 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/230e6783-badc-4663-9e6a-d3bc0b1c750a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "230e6783-badc-4663-9e6a-d3bc0b1c750a" (UID: "230e6783-badc-4663-9e6a-d3bc0b1c750a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 13 00:49:48.314431 kubelet[1885]: I0913 00:49:48.314384 1885 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/230e6783-badc-4663-9e6a-d3bc0b1c750a-cilium-ipsec-secrets\") on node \"ci-3510.3.8-n-17df7d76e4\" DevicePath \"\""
Sep 13 00:49:48.315011 kubelet[1885]: I0913 00:49:48.314982 1885 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/230e6783-badc-4663-9e6a-d3bc0b1c750a-etc-cni-netd\") on node \"ci-3510.3.8-n-17df7d76e4\" DevicePath \"\""
Sep 13 00:49:48.315110 kubelet[1885]: I0913 00:49:48.315097 1885 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/230e6783-badc-4663-9e6a-d3bc0b1c750a-host-proc-sys-kernel\") on node \"ci-3510.3.8-n-17df7d76e4\" DevicePath \"\""
Sep 13 00:49:48.315192 kubelet[1885]: I0913 00:49:48.315180 1885 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/230e6783-badc-4663-9e6a-d3bc0b1c750a-cni-path\") on node \"ci-3510.3.8-n-17df7d76e4\" DevicePath \"\""
Sep 13 00:49:48.315257 kubelet[1885]: I0913 00:49:48.315245 1885 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/230e6783-badc-4663-9e6a-d3bc0b1c750a-cilium-config-path\") on node \"ci-3510.3.8-n-17df7d76e4\" DevicePath \"\""
Sep 13 00:49:48.315320 kubelet[1885]: I0913 00:49:48.315306 1885 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/230e6783-badc-4663-9e6a-d3bc0b1c750a-bpf-maps\") on node \"ci-3510.3.8-n-17df7d76e4\" DevicePath \"\""
Sep 13 00:49:48.315413 kubelet[1885]: I0913 00:49:48.315396 1885 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/230e6783-badc-4663-9e6a-d3bc0b1c750a-xtables-lock\") on node \"ci-3510.3.8-n-17df7d76e4\" DevicePath \"\""
Sep 13 00:49:48.315491 kubelet[1885]: I0913 00:49:48.315480 1885 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/230e6783-badc-4663-9e6a-d3bc0b1c750a-hubble-tls\") on node \"ci-3510.3.8-n-17df7d76e4\" DevicePath \"\""
Sep 13 00:49:48.315552 kubelet[1885]: I0913 00:49:48.315539 1885 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/230e6783-badc-4663-9e6a-d3bc0b1c750a-lib-modules\") on node \"ci-3510.3.8-n-17df7d76e4\" DevicePath \"\""
Sep 13 00:49:48.315617 kubelet[1885]: I0913 00:49:48.315606 1885 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/230e6783-badc-4663-9e6a-d3bc0b1c750a-hostproc\") on node \"ci-3510.3.8-n-17df7d76e4\" DevicePath \"\""
Sep 13 00:49:48.315678 kubelet[1885]: I0913 00:49:48.315668 1885 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/230e6783-badc-4663-9e6a-d3bc0b1c750a-clustermesh-secrets\") on node \"ci-3510.3.8-n-17df7d76e4\" DevicePath \"\""
Sep 13 00:49:48.315762 kubelet[1885]: I0913 00:49:48.315747 1885 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/230e6783-badc-4663-9e6a-d3bc0b1c750a-cilium-run\") on node \"ci-3510.3.8-n-17df7d76e4\" DevicePath \"\""
Sep 13 00:49:48.315849 kubelet[1885]: I0913 00:49:48.315834 1885 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/230e6783-badc-4663-9e6a-d3bc0b1c750a-cilium-cgroup\") on node \"ci-3510.3.8-n-17df7d76e4\" DevicePath \"\""
Sep 13 00:49:48.315971 kubelet[1885]: I0913 00:49:48.315955 1885 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xnw4b\" (UniqueName: \"kubernetes.io/projected/230e6783-badc-4663-9e6a-d3bc0b1c750a-kube-api-access-xnw4b\") on node \"ci-3510.3.8-n-17df7d76e4\" DevicePath \"\""
Sep 13 00:49:48.316048 kubelet[1885]: I0913 00:49:48.316036 1885 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/230e6783-badc-4663-9e6a-d3bc0b1c750a-host-proc-sys-net\") on node \"ci-3510.3.8-n-17df7d76e4\" DevicePath \"\""
Sep 13 00:49:48.526937 systemd[1]: var-lib-kubelet-pods-230e6783\x2dbadc\x2d4663\x2d9e6a\x2dd3bc0b1c750a-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Sep 13 00:49:48.527059 systemd[1]: var-lib-kubelet-pods-230e6783\x2dbadc\x2d4663\x2d9e6a\x2dd3bc0b1c750a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Sep 13 00:49:48.908237 kubelet[1885]: E0913 00:49:48.908092 1885 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 13 00:49:49.137752 systemd[1]: Removed slice kubepods-burstable-pod230e6783_badc_4663_9e6a_d3bc0b1c750a.slice.
Sep 13 00:49:49.220033 systemd[1]: Created slice kubepods-burstable-pod9eb2d34d_0344_45a0_a1f9_b9ff5921fdc9.slice.
Sep 13 00:49:49.324563 kubelet[1885]: I0913 00:49:49.324495 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9eb2d34d-0344-45a0-a1f9-b9ff5921fdc9-cilium-ipsec-secrets\") pod \"cilium-z2dtt\" (UID: \"9eb2d34d-0344-45a0-a1f9-b9ff5921fdc9\") " pod="kube-system/cilium-z2dtt"
Sep 13 00:49:49.324563 kubelet[1885]: I0913 00:49:49.324565 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9eb2d34d-0344-45a0-a1f9-b9ff5921fdc9-cilium-config-path\") pod \"cilium-z2dtt\" (UID: \"9eb2d34d-0344-45a0-a1f9-b9ff5921fdc9\") " pod="kube-system/cilium-z2dtt"
Sep 13 00:49:49.325032 kubelet[1885]: I0913 00:49:49.324587 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9eb2d34d-0344-45a0-a1f9-b9ff5921fdc9-cilium-run\") pod \"cilium-z2dtt\" (UID: \"9eb2d34d-0344-45a0-a1f9-b9ff5921fdc9\") " pod="kube-system/cilium-z2dtt"
Sep 13 00:49:49.325032 kubelet[1885]: I0913 00:49:49.324606 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9eb2d34d-0344-45a0-a1f9-b9ff5921fdc9-bpf-maps\") pod \"cilium-z2dtt\" (UID: \"9eb2d34d-0344-45a0-a1f9-b9ff5921fdc9\") " pod="kube-system/cilium-z2dtt"
Sep 13 00:49:49.325032 kubelet[1885]: I0913 00:49:49.324629 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9eb2d34d-0344-45a0-a1f9-b9ff5921fdc9-xtables-lock\") pod \"cilium-z2dtt\" (UID: \"9eb2d34d-0344-45a0-a1f9-b9ff5921fdc9\") " pod="kube-system/cilium-z2dtt"
Sep 13 00:49:49.325032 kubelet[1885]: I0913 00:49:49.324648 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24ttv\" (UniqueName: \"kubernetes.io/projected/9eb2d34d-0344-45a0-a1f9-b9ff5921fdc9-kube-api-access-24ttv\") pod \"cilium-z2dtt\" (UID: \"9eb2d34d-0344-45a0-a1f9-b9ff5921fdc9\") " pod="kube-system/cilium-z2dtt"
Sep 13 00:49:49.325032 kubelet[1885]: I0913 00:49:49.324673 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9eb2d34d-0344-45a0-a1f9-b9ff5921fdc9-lib-modules\") pod \"cilium-z2dtt\" (UID: \"9eb2d34d-0344-45a0-a1f9-b9ff5921fdc9\") " pod="kube-system/cilium-z2dtt"
Sep 13 00:49:49.325032 kubelet[1885]: I0913 00:49:49.324692 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9eb2d34d-0344-45a0-a1f9-b9ff5921fdc9-hubble-tls\") pod \"cilium-z2dtt\" (UID: \"9eb2d34d-0344-45a0-a1f9-b9ff5921fdc9\") " pod="kube-system/cilium-z2dtt"
Sep 13 00:49:49.325202 kubelet[1885]: I0913 00:49:49.324713 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9eb2d34d-0344-45a0-a1f9-b9ff5921fdc9-etc-cni-netd\") pod \"cilium-z2dtt\" (UID: \"9eb2d34d-0344-45a0-a1f9-b9ff5921fdc9\") " pod="kube-system/cilium-z2dtt"
Sep 13 00:49:49.325202 kubelet[1885]: I0913 00:49:49.324732 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9eb2d34d-0344-45a0-a1f9-b9ff5921fdc9-clustermesh-secrets\") pod \"cilium-z2dtt\" (UID: \"9eb2d34d-0344-45a0-a1f9-b9ff5921fdc9\") " pod="kube-system/cilium-z2dtt"
Sep 13 00:49:49.325202 kubelet[1885]: I0913 00:49:49.324753 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9eb2d34d-0344-45a0-a1f9-b9ff5921fdc9-host-proc-sys-kernel\") pod \"cilium-z2dtt\" (UID: \"9eb2d34d-0344-45a0-a1f9-b9ff5921fdc9\") " pod="kube-system/cilium-z2dtt"
Sep 13 00:49:49.325202 kubelet[1885]: I0913 00:49:49.324772 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9eb2d34d-0344-45a0-a1f9-b9ff5921fdc9-hostproc\") pod \"cilium-z2dtt\" (UID: \"9eb2d34d-0344-45a0-a1f9-b9ff5921fdc9\") " pod="kube-system/cilium-z2dtt"
Sep 13 00:49:49.325202 kubelet[1885]: I0913 00:49:49.324792 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9eb2d34d-0344-45a0-a1f9-b9ff5921fdc9-cilium-cgroup\") pod \"cilium-z2dtt\" (UID: \"9eb2d34d-0344-45a0-a1f9-b9ff5921fdc9\") " pod="kube-system/cilium-z2dtt"
Sep 13 00:49:49.325202 kubelet[1885]: I0913 00:49:49.324813 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9eb2d34d-0344-45a0-a1f9-b9ff5921fdc9-cni-path\") pod \"cilium-z2dtt\" (UID: \"9eb2d34d-0344-45a0-a1f9-b9ff5921fdc9\") " pod="kube-system/cilium-z2dtt"
Sep 13 00:49:49.325362 kubelet[1885]: I0913 00:49:49.324833 1885 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9eb2d34d-0344-45a0-a1f9-b9ff5921fdc9-host-proc-sys-net\") pod
\"cilium-z2dtt\" (UID: \"9eb2d34d-0344-45a0-a1f9-b9ff5921fdc9\") " pod="kube-system/cilium-z2dtt"
Sep 13 00:49:49.531649 kubelet[1885]: E0913 00:49:49.531489 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:49:49.534291 env[1193]: time="2025-09-13T00:49:49.532337228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-z2dtt,Uid:9eb2d34d-0344-45a0-a1f9-b9ff5921fdc9,Namespace:kube-system,Attempt:0,}"
Sep 13 00:49:49.562657 env[1193]: time="2025-09-13T00:49:49.562540383Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:49:49.563008 env[1193]: time="2025-09-13T00:49:49.562609123Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:49:49.563008 env[1193]: time="2025-09-13T00:49:49.562623573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:49:49.563395 env[1193]: time="2025-09-13T00:49:49.563329429Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/47696ee30ae8a3864cb57178a2aecf76d28be3c3a2db1715fc504edc91ec0f73 pid=3713 runtime=io.containerd.runc.v2
Sep 13 00:49:49.590368 systemd[1]: Started cri-containerd-47696ee30ae8a3864cb57178a2aecf76d28be3c3a2db1715fc504edc91ec0f73.scope.
Sep 13 00:49:49.622562 env[1193]: time="2025-09-13T00:49:49.622496601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-z2dtt,Uid:9eb2d34d-0344-45a0-a1f9-b9ff5921fdc9,Namespace:kube-system,Attempt:0,} returns sandbox id \"47696ee30ae8a3864cb57178a2aecf76d28be3c3a2db1715fc504edc91ec0f73\""
Sep 13 00:49:49.623373 kubelet[1885]: E0913 00:49:49.623339 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:49:49.630650 env[1193]: time="2025-09-13T00:49:49.630579518Z" level=info msg="CreateContainer within sandbox \"47696ee30ae8a3864cb57178a2aecf76d28be3c3a2db1715fc504edc91ec0f73\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 13 00:49:49.648641 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2997678708.mount: Deactivated successfully.
Sep 13 00:49:49.650517 env[1193]: time="2025-09-13T00:49:49.650101190Z" level=info msg="CreateContainer within sandbox \"47696ee30ae8a3864cb57178a2aecf76d28be3c3a2db1715fc504edc91ec0f73\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"311f2525e40712cda293a37a3e73bbd0a9935e806aac5b5bf590ecfbfd419eaa\""
Sep 13 00:49:49.652955 env[1193]: time="2025-09-13T00:49:49.652901354Z" level=info msg="StartContainer for \"311f2525e40712cda293a37a3e73bbd0a9935e806aac5b5bf590ecfbfd419eaa\""
Sep 13 00:49:49.676425 systemd[1]: Started cri-containerd-311f2525e40712cda293a37a3e73bbd0a9935e806aac5b5bf590ecfbfd419eaa.scope.
Sep 13 00:49:49.720692 env[1193]: time="2025-09-13T00:49:49.720638127Z" level=info msg="StartContainer for \"311f2525e40712cda293a37a3e73bbd0a9935e806aac5b5bf590ecfbfd419eaa\" returns successfully"
Sep 13 00:49:49.739607 systemd[1]: cri-containerd-311f2525e40712cda293a37a3e73bbd0a9935e806aac5b5bf590ecfbfd419eaa.scope: Deactivated successfully.
Sep 13 00:49:49.770479 kubelet[1885]: I0913 00:49:49.770426 1885 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="230e6783-badc-4663-9e6a-d3bc0b1c750a" path="/var/lib/kubelet/pods/230e6783-badc-4663-9e6a-d3bc0b1c750a/volumes"
Sep 13 00:49:49.772315 env[1193]: time="2025-09-13T00:49:49.772254170Z" level=info msg="shim disconnected" id=311f2525e40712cda293a37a3e73bbd0a9935e806aac5b5bf590ecfbfd419eaa
Sep 13 00:49:49.772315 env[1193]: time="2025-09-13T00:49:49.772311011Z" level=warning msg="cleaning up after shim disconnected" id=311f2525e40712cda293a37a3e73bbd0a9935e806aac5b5bf590ecfbfd419eaa namespace=k8s.io
Sep 13 00:49:49.772644 env[1193]: time="2025-09-13T00:49:49.772321677Z" level=info msg="cleaning up dead shim"
Sep 13 00:49:49.784335 env[1193]: time="2025-09-13T00:49:49.784191492Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:49:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3797 runtime=io.containerd.runc.v2\n"
Sep 13 00:49:50.139097 kubelet[1885]: E0913 00:49:50.138991 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:49:50.141743 env[1193]: time="2025-09-13T00:49:50.141701125Z" level=info msg="CreateContainer within sandbox \"47696ee30ae8a3864cb57178a2aecf76d28be3c3a2db1715fc504edc91ec0f73\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 13 00:49:50.154193 env[1193]: time="2025-09-13T00:49:50.154107850Z" level=info msg="CreateContainer within sandbox \"47696ee30ae8a3864cb57178a2aecf76d28be3c3a2db1715fc504edc91ec0f73\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8eb57c36f9ff341530f154f27d627a8312d4b51c284e7e96ad468f69947b54b3\""
Sep 13 00:49:50.154910 env[1193]: time="2025-09-13T00:49:50.154841783Z" level=info msg="StartContainer for
\"8eb57c36f9ff341530f154f27d627a8312d4b51c284e7e96ad468f69947b54b3\""
Sep 13 00:49:50.185983 systemd[1]: Started cri-containerd-8eb57c36f9ff341530f154f27d627a8312d4b51c284e7e96ad468f69947b54b3.scope.
Sep 13 00:49:50.250704 env[1193]: time="2025-09-13T00:49:50.250633950Z" level=info msg="StartContainer for \"8eb57c36f9ff341530f154f27d627a8312d4b51c284e7e96ad468f69947b54b3\" returns successfully"
Sep 13 00:49:50.267028 systemd[1]: cri-containerd-8eb57c36f9ff341530f154f27d627a8312d4b51c284e7e96ad468f69947b54b3.scope: Deactivated successfully.
Sep 13 00:49:50.296803 env[1193]: time="2025-09-13T00:49:50.296717145Z" level=info msg="shim disconnected" id=8eb57c36f9ff341530f154f27d627a8312d4b51c284e7e96ad468f69947b54b3
Sep 13 00:49:50.296803 env[1193]: time="2025-09-13T00:49:50.296790266Z" level=warning msg="cleaning up after shim disconnected" id=8eb57c36f9ff341530f154f27d627a8312d4b51c284e7e96ad468f69947b54b3 namespace=k8s.io
Sep 13 00:49:50.296803 env[1193]: time="2025-09-13T00:49:50.296807297Z" level=info msg="cleaning up dead shim"
Sep 13 00:49:50.311506 env[1193]: time="2025-09-13T00:49:50.311450984Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:49:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3861 runtime=io.containerd.runc.v2\n"
Sep 13 00:49:51.141878 kubelet[1885]: E0913 00:49:51.141814 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:49:51.144057 env[1193]: time="2025-09-13T00:49:51.144007262Z" level=info msg="CreateContainer within sandbox \"47696ee30ae8a3864cb57178a2aecf76d28be3c3a2db1715fc504edc91ec0f73\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 13 00:49:51.159960 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1920255575.mount: Deactivated successfully.
Sep 13 00:49:51.166279 env[1193]: time="2025-09-13T00:49:51.166222533Z" level=info msg="CreateContainer within sandbox \"47696ee30ae8a3864cb57178a2aecf76d28be3c3a2db1715fc504edc91ec0f73\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c6c3a55fccb0b9256cadda6c32deba330573232b05680dd04d5cc6904b8e1c72\""
Sep 13 00:49:51.172884 env[1193]: time="2025-09-13T00:49:51.171186552Z" level=info msg="StartContainer for \"c6c3a55fccb0b9256cadda6c32deba330573232b05680dd04d5cc6904b8e1c72\""
Sep 13 00:49:51.199294 systemd[1]: Started cri-containerd-c6c3a55fccb0b9256cadda6c32deba330573232b05680dd04d5cc6904b8e1c72.scope.
Sep 13 00:49:51.246556 env[1193]: time="2025-09-13T00:49:51.246485043Z" level=info msg="StartContainer for \"c6c3a55fccb0b9256cadda6c32deba330573232b05680dd04d5cc6904b8e1c72\" returns successfully"
Sep 13 00:49:51.255887 systemd[1]: cri-containerd-c6c3a55fccb0b9256cadda6c32deba330573232b05680dd04d5cc6904b8e1c72.scope: Deactivated successfully.
Sep 13 00:49:51.283222 env[1193]: time="2025-09-13T00:49:51.283174904Z" level=info msg="shim disconnected" id=c6c3a55fccb0b9256cadda6c32deba330573232b05680dd04d5cc6904b8e1c72
Sep 13 00:49:51.283622 env[1193]: time="2025-09-13T00:49:51.283586004Z" level=warning msg="cleaning up after shim disconnected" id=c6c3a55fccb0b9256cadda6c32deba330573232b05680dd04d5cc6904b8e1c72 namespace=k8s.io
Sep 13 00:49:51.283763 env[1193]: time="2025-09-13T00:49:51.283737508Z" level=info msg="cleaning up dead shim"
Sep 13 00:49:51.292922 env[1193]: time="2025-09-13T00:49:51.292821410Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:49:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3918 runtime=io.containerd.runc.v2\n"
Sep 13 00:49:51.546475 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c6c3a55fccb0b9256cadda6c32deba330573232b05680dd04d5cc6904b8e1c72-rootfs.mount: Deactivated successfully.
Sep 13 00:49:52.145843 kubelet[1885]: E0913 00:49:52.145792 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:49:52.147983 env[1193]: time="2025-09-13T00:49:52.147944135Z" level=info msg="CreateContainer within sandbox \"47696ee30ae8a3864cb57178a2aecf76d28be3c3a2db1715fc504edc91ec0f73\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 13 00:49:52.162796 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1047610922.mount: Deactivated successfully.
Sep 13 00:49:52.165361 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3228599559.mount: Deactivated successfully.
Sep 13 00:49:52.172622 env[1193]: time="2025-09-13T00:49:52.172569602Z" level=info msg="CreateContainer within sandbox \"47696ee30ae8a3864cb57178a2aecf76d28be3c3a2db1715fc504edc91ec0f73\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"be0d36d9e0dea5fe9a5197ff1b2fe6301c745a16b80f7e4c58641eebabbd6147\""
Sep 13 00:49:52.174611 env[1193]: time="2025-09-13T00:49:52.174567330Z" level=info msg="StartContainer for \"be0d36d9e0dea5fe9a5197ff1b2fe6301c745a16b80f7e4c58641eebabbd6147\""
Sep 13 00:49:52.206653 systemd[1]: Started cri-containerd-be0d36d9e0dea5fe9a5197ff1b2fe6301c745a16b80f7e4c58641eebabbd6147.scope.
Sep 13 00:49:52.242567 systemd[1]: cri-containerd-be0d36d9e0dea5fe9a5197ff1b2fe6301c745a16b80f7e4c58641eebabbd6147.scope: Deactivated successfully.
Sep 13 00:49:52.247137 env[1193]: time="2025-09-13T00:49:52.247078634Z" level=info msg="StartContainer for \"be0d36d9e0dea5fe9a5197ff1b2fe6301c745a16b80f7e4c58641eebabbd6147\" returns successfully"
Sep 13 00:49:52.247541 env[1193]: time="2025-09-13T00:49:52.244531997Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9eb2d34d_0344_45a0_a1f9_b9ff5921fdc9.slice/cri-containerd-be0d36d9e0dea5fe9a5197ff1b2fe6301c745a16b80f7e4c58641eebabbd6147.scope/memory.events\": no such file or directory"
Sep 13 00:49:52.272088 env[1193]: time="2025-09-13T00:49:52.272036980Z" level=info msg="shim disconnected" id=be0d36d9e0dea5fe9a5197ff1b2fe6301c745a16b80f7e4c58641eebabbd6147
Sep 13 00:49:52.272088 env[1193]: time="2025-09-13T00:49:52.272081011Z" level=warning msg="cleaning up after shim disconnected" id=be0d36d9e0dea5fe9a5197ff1b2fe6301c745a16b80f7e4c58641eebabbd6147 namespace=k8s.io
Sep 13 00:49:52.272088 env[1193]: time="2025-09-13T00:49:52.272091154Z" level=info msg="cleaning up dead shim"
Sep 13 00:49:52.282020 env[1193]: time="2025-09-13T00:49:52.281963706Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:49:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3976 runtime=io.containerd.runc.v2\n"
Sep 13 00:49:52.546983 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-be0d36d9e0dea5fe9a5197ff1b2fe6301c745a16b80f7e4c58641eebabbd6147-rootfs.mount: Deactivated successfully.
Sep 13 00:49:53.151349 kubelet[1885]: E0913 00:49:53.150124 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:49:53.159055 env[1193]: time="2025-09-13T00:49:53.158994188Z" level=info msg="CreateContainer within sandbox \"47696ee30ae8a3864cb57178a2aecf76d28be3c3a2db1715fc504edc91ec0f73\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 13 00:49:53.171743 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1450382725.mount: Deactivated successfully.
Sep 13 00:49:53.179370 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4128865997.mount: Deactivated successfully.
Sep 13 00:49:53.184447 env[1193]: time="2025-09-13T00:49:53.184401197Z" level=info msg="CreateContainer within sandbox \"47696ee30ae8a3864cb57178a2aecf76d28be3c3a2db1715fc504edc91ec0f73\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"14db6e24453646da1d9761544c89de76ab1257de36c93bfd7d117c93b24047ae\""
Sep 13 00:49:53.185330 env[1193]: time="2025-09-13T00:49:53.185297849Z" level=info msg="StartContainer for \"14db6e24453646da1d9761544c89de76ab1257de36c93bfd7d117c93b24047ae\""
Sep 13 00:49:53.208798 systemd[1]: Started cri-containerd-14db6e24453646da1d9761544c89de76ab1257de36c93bfd7d117c93b24047ae.scope.
Sep 13 00:49:53.246473 env[1193]: time="2025-09-13T00:49:53.246347869Z" level=info msg="StartContainer for \"14db6e24453646da1d9761544c89de76ab1257de36c93bfd7d117c93b24047ae\" returns successfully"
Sep 13 00:49:53.747402 env[1193]: time="2025-09-13T00:49:53.747352911Z" level=info msg="StopPodSandbox for \"8d1c67c9d30ab6995e9f539b1a5cfecfb836ff8825e74999c1e4229b36a923d8\""
Sep 13 00:49:53.747561 env[1193]: time="2025-09-13T00:49:53.747448395Z" level=info msg="TearDown network for sandbox \"8d1c67c9d30ab6995e9f539b1a5cfecfb836ff8825e74999c1e4229b36a923d8\" successfully"
Sep 13 00:49:53.747561 env[1193]: time="2025-09-13T00:49:53.747483459Z" level=info msg="StopPodSandbox for \"8d1c67c9d30ab6995e9f539b1a5cfecfb836ff8825e74999c1e4229b36a923d8\" returns successfully"
Sep 13 00:49:53.747826 env[1193]: time="2025-09-13T00:49:53.747797490Z" level=info msg="RemovePodSandbox for \"8d1c67c9d30ab6995e9f539b1a5cfecfb836ff8825e74999c1e4229b36a923d8\""
Sep 13 00:49:53.748017 env[1193]: time="2025-09-13T00:49:53.747826499Z" level=info msg="Forcibly stopping sandbox \"8d1c67c9d30ab6995e9f539b1a5cfecfb836ff8825e74999c1e4229b36a923d8\""
Sep 13 00:49:53.748017 env[1193]: time="2025-09-13T00:49:53.747961922Z" level=info msg="TearDown network for sandbox \"8d1c67c9d30ab6995e9f539b1a5cfecfb836ff8825e74999c1e4229b36a923d8\" successfully"
Sep 13 00:49:53.750746 env[1193]: time="2025-09-13T00:49:53.750692373Z" level=info msg="RemovePodSandbox \"8d1c67c9d30ab6995e9f539b1a5cfecfb836ff8825e74999c1e4229b36a923d8\" returns successfully"
Sep 13 00:49:53.751199 env[1193]: time="2025-09-13T00:49:53.751168265Z" level=info msg="StopPodSandbox for \"73e5bfb36e236e7bcfab5487faf34d8b9a82b80875fead07dddb93427b7846d3\""
Sep 13 00:49:53.751317 env[1193]: time="2025-09-13T00:49:53.751273430Z" level=info msg="TearDown network for sandbox \"73e5bfb36e236e7bcfab5487faf34d8b9a82b80875fead07dddb93427b7846d3\" successfully"
Sep 13 00:49:53.751317 env[1193]: time="2025-09-13T00:49:53.751314118Z" level=info
msg="StopPodSandbox for \"73e5bfb36e236e7bcfab5487faf34d8b9a82b80875fead07dddb93427b7846d3\" returns successfully"
Sep 13 00:49:53.751653 env[1193]: time="2025-09-13T00:49:53.751630516Z" level=info msg="RemovePodSandbox for \"73e5bfb36e236e7bcfab5487faf34d8b9a82b80875fead07dddb93427b7846d3\""
Sep 13 00:49:53.751712 env[1193]: time="2025-09-13T00:49:53.751655315Z" level=info msg="Forcibly stopping sandbox \"73e5bfb36e236e7bcfab5487faf34d8b9a82b80875fead07dddb93427b7846d3\""
Sep 13 00:49:53.751745 env[1193]: time="2025-09-13T00:49:53.751727807Z" level=info msg="TearDown network for sandbox \"73e5bfb36e236e7bcfab5487faf34d8b9a82b80875fead07dddb93427b7846d3\" successfully"
Sep 13 00:49:53.756021 env[1193]: time="2025-09-13T00:49:53.755966105Z" level=info msg="RemovePodSandbox \"73e5bfb36e236e7bcfab5487faf34d8b9a82b80875fead07dddb93427b7846d3\" returns successfully"
Sep 13 00:49:53.760884 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Sep 13 00:49:54.155813 kubelet[1885]: E0913 00:49:54.155690 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:49:54.176016 kubelet[1885]: I0913 00:49:54.175765 1885 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-z2dtt" podStartSLOduration=5.175747397 podStartE2EDuration="5.175747397s" podCreationTimestamp="2025-09-13 00:49:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:49:54.174669294 +0000 UTC m=+120.612064094" watchObservedRunningTime="2025-09-13 00:49:54.175747397 +0000 UTC m=+120.613142197"
Sep 13 00:49:55.533913 kubelet[1885]: E0913 00:49:55.533833 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:49:55.998327 systemd[1]: run-containerd-runc-k8s.io-14db6e24453646da1d9761544c89de76ab1257de36c93bfd7d117c93b24047ae-runc.ivlUz6.mount: Deactivated successfully.
Sep 13 00:49:57.068146 systemd-networkd[1003]: lxc_health: Link UP
Sep 13 00:49:57.078932 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Sep 13 00:49:57.078943 systemd-networkd[1003]: lxc_health: Gained carrier
Sep 13 00:49:57.534322 kubelet[1885]: E0913 00:49:57.534275 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:49:58.168964 kubelet[1885]: E0913 00:49:58.168916 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:49:58.182789 systemd[1]: run-containerd-runc-k8s.io-14db6e24453646da1d9761544c89de76ab1257de36c93bfd7d117c93b24047ae-runc.iik7oZ.mount: Deactivated successfully.
Sep 13 00:49:59.144167 systemd-networkd[1003]: lxc_health: Gained IPv6LL
Sep 13 00:49:59.170718 kubelet[1885]: E0913 00:49:59.170666 1885 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:50:00.406235 systemd[1]: run-containerd-runc-k8s.io-14db6e24453646da1d9761544c89de76ab1257de36c93bfd7d117c93b24047ae-runc.mee8Xu.mount: Deactivated successfully.
Sep 13 00:50:02.809187 sshd[3682]: pam_unix(sshd:session): session closed for user core
Sep 13 00:50:02.815341 systemd[1]: sshd@28-146.190.148.102:22-147.75.109.163:35786.service: Deactivated successfully.
Sep 13 00:50:02.816397 systemd[1]: session-29.scope: Deactivated successfully.
Sep 13 00:50:02.817507 systemd-logind[1182]: Session 29 logged out. Waiting for processes to exit.
Sep 13 00:50:02.822343 systemd-logind[1182]: Removed session 29.