Nov 1 00:38:13.062280 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Oct 31 23:02:53 -00 2025 Nov 1 00:38:13.062322 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=c4c72a4f851a6da01cbc7150799371516ef8311ea786098908d8eb164df01ee2 Nov 1 00:38:13.062341 kernel: BIOS-provided physical RAM map: Nov 1 00:38:13.062353 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Nov 1 00:38:13.062363 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Nov 1 00:38:13.062374 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Nov 1 00:38:13.062396 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable Nov 1 00:38:13.062407 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved Nov 1 00:38:13.062421 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Nov 1 00:38:13.062432 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Nov 1 00:38:13.062451 kernel: NX (Execute Disable) protection: active Nov 1 00:38:13.062462 kernel: SMBIOS 2.8 present. 
Nov 1 00:38:13.062474 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Nov 1 00:38:13.062485 kernel: Hypervisor detected: KVM Nov 1 00:38:13.062500 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Nov 1 00:38:13.062515 kernel: kvm-clock: cpu 0, msr 771a0001, primary cpu clock Nov 1 00:38:13.062528 kernel: kvm-clock: using sched offset of 4251542960 cycles Nov 1 00:38:13.062541 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Nov 1 00:38:13.062559 kernel: tsc: Detected 1995.304 MHz processor Nov 1 00:38:13.062572 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 1 00:38:13.062585 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 1 00:38:13.062597 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Nov 1 00:38:13.062610 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 1 00:38:13.062626 kernel: ACPI: Early table checksum verification disabled Nov 1 00:38:13.062638 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS ) Nov 1 00:38:13.062651 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 00:38:13.062664 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 00:38:13.062677 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 00:38:13.062689 kernel: ACPI: FACS 0x000000007FFE0000 000040 Nov 1 00:38:13.062718 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 00:38:13.062730 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 00:38:13.067814 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 00:38:13.067838 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 00:38:13.067852 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd] Nov 1 
00:38:13.067867 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] Nov 1 00:38:13.067879 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Nov 1 00:38:13.067893 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Nov 1 00:38:13.067905 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Nov 1 00:38:13.067918 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Nov 1 00:38:13.067932 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Nov 1 00:38:13.067953 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Nov 1 00:38:13.067967 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Nov 1 00:38:13.067980 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Nov 1 00:38:13.067994 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Nov 1 00:38:13.068008 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff] Nov 1 00:38:13.068022 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff] Nov 1 00:38:13.068039 kernel: Zone ranges: Nov 1 00:38:13.068053 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 1 00:38:13.068066 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff] Nov 1 00:38:13.068080 kernel: Normal empty Nov 1 00:38:13.068093 kernel: Movable zone start for each node Nov 1 00:38:13.068106 kernel: Early memory node ranges Nov 1 00:38:13.068127 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Nov 1 00:38:13.068140 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff] Nov 1 00:38:13.068154 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff] Nov 1 00:38:13.068171 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 1 00:38:13.068195 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Nov 1 00:38:13.068209 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges Nov 1 00:38:13.068222 kernel: ACPI: PM-Timer IO Port: 
0x608 Nov 1 00:38:13.068237 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Nov 1 00:38:13.068250 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Nov 1 00:38:13.068270 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Nov 1 00:38:13.068284 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Nov 1 00:38:13.068298 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 1 00:38:13.068315 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Nov 1 00:38:13.068335 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Nov 1 00:38:13.068348 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 1 00:38:13.068374 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Nov 1 00:38:13.068388 kernel: TSC deadline timer available Nov 1 00:38:13.068401 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Nov 1 00:38:13.068415 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Nov 1 00:38:13.068429 kernel: Booting paravirtualized kernel on KVM Nov 1 00:38:13.068443 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 1 00:38:13.068461 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Nov 1 00:38:13.068475 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576 Nov 1 00:38:13.068488 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152 Nov 1 00:38:13.068502 kernel: pcpu-alloc: [0] 0 1 Nov 1 00:38:13.068515 kernel: kvm-guest: stealtime: cpu 0, msr 7dc1c0c0 Nov 1 00:38:13.068529 kernel: kvm-guest: PV spinlocks disabled, no host support Nov 1 00:38:13.068543 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 515803 Nov 1 00:38:13.068556 kernel: Policy zone: DMA32 Nov 1 00:38:13.068571 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=c4c72a4f851a6da01cbc7150799371516ef8311ea786098908d8eb164df01ee2 Nov 1 00:38:13.068589 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Nov 1 00:38:13.068602 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 1 00:38:13.068616 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Nov 1 00:38:13.068630 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 1 00:38:13.068651 kernel: Memory: 1973276K/2096612K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47496K init, 4084K bss, 123076K reserved, 0K cma-reserved) Nov 1 00:38:13.068664 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Nov 1 00:38:13.068678 kernel: Kernel/User page tables isolation: enabled Nov 1 00:38:13.068710 kernel: ftrace: allocating 34614 entries in 136 pages Nov 1 00:38:13.068727 kernel: ftrace: allocated 136 pages with 2 groups Nov 1 00:38:13.068740 kernel: rcu: Hierarchical RCU implementation. Nov 1 00:38:13.068755 kernel: rcu: RCU event tracing is enabled. Nov 1 00:38:13.068768 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Nov 1 00:38:13.068782 kernel: Rude variant of Tasks RCU enabled. Nov 1 00:38:13.068803 kernel: Tracing variant of Tasks RCU enabled. Nov 1 00:38:13.068816 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Nov 1 00:38:13.068830 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Nov 1 00:38:13.068862 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Nov 1 00:38:13.068885 kernel: random: crng init done Nov 1 00:38:13.068898 kernel: Console: colour VGA+ 80x25 Nov 1 00:38:13.068912 kernel: printk: console [tty0] enabled Nov 1 00:38:13.068926 kernel: printk: console [ttyS0] enabled Nov 1 00:38:13.068940 kernel: ACPI: Core revision 20210730 Nov 1 00:38:13.068953 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Nov 1 00:38:13.068979 kernel: APIC: Switch to symmetric I/O mode setup Nov 1 00:38:13.068992 kernel: x2apic enabled Nov 1 00:38:13.069012 kernel: Switched APIC routing to physical x2apic. Nov 1 00:38:13.069026 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Nov 1 00:38:13.069042 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3985b48a746, max_idle_ns: 881590510383 ns Nov 1 00:38:13.069056 kernel: Calibrating delay loop (skipped) preset value.. 
3990.60 BogoMIPS (lpj=1995304) Nov 1 00:38:13.069079 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Nov 1 00:38:13.069093 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Nov 1 00:38:13.069107 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 1 00:38:13.069120 kernel: Spectre V2 : Mitigation: Retpolines Nov 1 00:38:13.069134 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Nov 1 00:38:13.069147 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Nov 1 00:38:13.069164 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 1 00:38:13.069189 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Nov 1 00:38:13.069203 kernel: MDS: Mitigation: Clear CPU buffers Nov 1 00:38:13.069220 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Nov 1 00:38:13.069234 kernel: active return thunk: its_return_thunk Nov 1 00:38:13.069248 kernel: ITS: Mitigation: Aligned branch/return thunks Nov 1 00:38:13.069262 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 1 00:38:13.069276 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 1 00:38:13.069291 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 1 00:38:13.069305 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 1 00:38:13.069322 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Nov 1 00:38:13.069337 kernel: Freeing SMP alternatives memory: 32K Nov 1 00:38:13.069351 kernel: pid_max: default: 32768 minimum: 301 Nov 1 00:38:13.069365 kernel: LSM: Security Framework initializing Nov 1 00:38:13.069379 kernel: SELinux: Initializing. 
Nov 1 00:38:13.069393 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Nov 1 00:38:13.069408 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Nov 1 00:38:13.069443 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Nov 1 00:38:13.069458 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. Nov 1 00:38:13.069472 kernel: signal: max sigframe size: 1776 Nov 1 00:38:13.069486 kernel: rcu: Hierarchical SRCU implementation. Nov 1 00:38:13.069500 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Nov 1 00:38:13.069515 kernel: smp: Bringing up secondary CPUs ... Nov 1 00:38:13.069529 kernel: x86: Booting SMP configuration: Nov 1 00:38:13.069543 kernel: .... node #0, CPUs: #1 Nov 1 00:38:13.069557 kernel: kvm-clock: cpu 1, msr 771a0041, secondary cpu clock Nov 1 00:38:13.069574 kernel: kvm-guest: stealtime: cpu 1, msr 7dd1c0c0 Nov 1 00:38:13.069588 kernel: smp: Brought up 1 node, 2 CPUs Nov 1 00:38:13.069602 kernel: smpboot: Max logical packages: 1 Nov 1 00:38:13.069623 kernel: smpboot: Total of 2 processors activated (7981.21 BogoMIPS) Nov 1 00:38:13.069638 kernel: devtmpfs: initialized Nov 1 00:38:13.069652 kernel: x86/mm: Memory block size: 128MB Nov 1 00:38:13.069666 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 1 00:38:13.069687 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Nov 1 00:38:13.069714 kernel: pinctrl core: initialized pinctrl subsystem Nov 1 00:38:13.069731 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 1 00:38:13.069745 kernel: audit: initializing netlink subsys (disabled) Nov 1 00:38:13.069760 kernel: audit: type=2000 audit(1761957492.023:1): state=initialized audit_enabled=0 res=1 Nov 1 00:38:13.069774 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 1 00:38:13.069788 kernel: thermal_sys: Registered 
thermal governor 'user_space' Nov 1 00:38:13.069802 kernel: cpuidle: using governor menu Nov 1 00:38:13.069821 kernel: ACPI: bus type PCI registered Nov 1 00:38:13.069836 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 1 00:38:13.069851 kernel: dca service started, version 1.12.1 Nov 1 00:38:13.069868 kernel: PCI: Using configuration type 1 for base access Nov 1 00:38:13.069883 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Nov 1 00:38:13.069897 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Nov 1 00:38:13.069912 kernel: ACPI: Added _OSI(Module Device) Nov 1 00:38:13.069925 kernel: ACPI: Added _OSI(Processor Device) Nov 1 00:38:13.069940 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 1 00:38:13.069954 kernel: ACPI: Added _OSI(Linux-Dell-Video) Nov 1 00:38:13.069968 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Nov 1 00:38:13.069982 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Nov 1 00:38:13.070000 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 1 00:38:13.070014 kernel: ACPI: Interpreter enabled Nov 1 00:38:13.070028 kernel: ACPI: PM: (supports S0 S5) Nov 1 00:38:13.070042 kernel: ACPI: Using IOAPIC for interrupt routing Nov 1 00:38:13.070057 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 1 00:38:13.070072 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Nov 1 00:38:13.070086 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 1 00:38:13.070408 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Nov 1 00:38:13.070606 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. 
Nov 1 00:38:13.070628 kernel: acpiphp: Slot [3] registered Nov 1 00:38:13.070665 kernel: acpiphp: Slot [4] registered Nov 1 00:38:13.070682 kernel: acpiphp: Slot [5] registered Nov 1 00:38:13.070712 kernel: acpiphp: Slot [6] registered Nov 1 00:38:13.070745 kernel: acpiphp: Slot [7] registered Nov 1 00:38:13.070760 kernel: acpiphp: Slot [8] registered Nov 1 00:38:13.070784 kernel: acpiphp: Slot [9] registered Nov 1 00:38:13.070801 kernel: acpiphp: Slot [10] registered Nov 1 00:38:13.070823 kernel: acpiphp: Slot [11] registered Nov 1 00:38:13.070839 kernel: acpiphp: Slot [12] registered Nov 1 00:38:13.070862 kernel: acpiphp: Slot [13] registered Nov 1 00:38:13.070878 kernel: acpiphp: Slot [14] registered Nov 1 00:38:13.070892 kernel: acpiphp: Slot [15] registered Nov 1 00:38:13.070922 kernel: acpiphp: Slot [16] registered Nov 1 00:38:13.070937 kernel: acpiphp: Slot [17] registered Nov 1 00:38:13.070951 kernel: acpiphp: Slot [18] registered Nov 1 00:38:13.070974 kernel: acpiphp: Slot [19] registered Nov 1 00:38:13.070993 kernel: acpiphp: Slot [20] registered Nov 1 00:38:13.071008 kernel: acpiphp: Slot [21] registered Nov 1 00:38:13.071022 kernel: acpiphp: Slot [22] registered Nov 1 00:38:13.071036 kernel: acpiphp: Slot [23] registered Nov 1 00:38:13.071057 kernel: acpiphp: Slot [24] registered Nov 1 00:38:13.071071 kernel: acpiphp: Slot [25] registered Nov 1 00:38:13.071086 kernel: acpiphp: Slot [26] registered Nov 1 00:38:13.071110 kernel: acpiphp: Slot [27] registered Nov 1 00:38:13.071124 kernel: acpiphp: Slot [28] registered Nov 1 00:38:13.071138 kernel: acpiphp: Slot [29] registered Nov 1 00:38:13.071156 kernel: acpiphp: Slot [30] registered Nov 1 00:38:13.071170 kernel: acpiphp: Slot [31] registered Nov 1 00:38:13.071191 kernel: PCI host bridge to bus 0000:00 Nov 1 00:38:13.071404 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 1 00:38:13.071583 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Nov 1 00:38:13.071763 
kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 1 00:38:13.071937 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Nov 1 00:38:13.072112 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Nov 1 00:38:13.072272 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 1 00:38:13.072537 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Nov 1 00:38:13.073857 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Nov 1 00:38:13.074052 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Nov 1 00:38:13.074194 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef] Nov 1 00:38:13.074353 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Nov 1 00:38:13.074495 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Nov 1 00:38:13.074635 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Nov 1 00:38:13.074796 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Nov 1 00:38:13.074981 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 Nov 1 00:38:13.075121 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f] Nov 1 00:38:13.075274 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Nov 1 00:38:13.075419 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Nov 1 00:38:13.075557 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Nov 1 00:38:13.080892 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Nov 1 00:38:13.081092 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Nov 1 00:38:13.081242 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Nov 1 00:38:13.081390 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff] Nov 1 00:38:13.081527 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Nov 1 00:38:13.081681 kernel: pci 0000:00:02.0: Video device with 
shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 1 00:38:13.081867 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Nov 1 00:38:13.082008 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf] Nov 1 00:38:13.082145 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff] Nov 1 00:38:13.082323 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Nov 1 00:38:13.082508 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Nov 1 00:38:13.082660 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df] Nov 1 00:38:13.082907 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff] Nov 1 00:38:13.083050 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Nov 1 00:38:13.083214 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 Nov 1 00:38:13.083354 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f] Nov 1 00:38:13.083500 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff] Nov 1 00:38:13.083645 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Nov 1 00:38:13.083839 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 Nov 1 00:38:13.083986 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f] Nov 1 00:38:13.084139 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff] Nov 1 00:38:13.084278 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Nov 1 00:38:13.084459 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 Nov 1 00:38:13.084597 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff] Nov 1 00:38:13.084752 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff] Nov 1 00:38:13.084901 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref] Nov 1 00:38:13.085083 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 Nov 1 00:38:13.085224 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f] Nov 1 00:38:13.085363 kernel: pci 0000:00:08.0: reg 0x20: [mem 
0xfe818000-0xfe81bfff 64bit pref] Nov 1 00:38:13.085381 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Nov 1 00:38:13.085396 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Nov 1 00:38:13.085410 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 1 00:38:13.085429 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Nov 1 00:38:13.085443 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Nov 1 00:38:13.085459 kernel: iommu: Default domain type: Translated Nov 1 00:38:13.085473 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 1 00:38:13.085610 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Nov 1 00:38:13.085781 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 1 00:38:13.085958 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Nov 1 00:38:13.085987 kernel: vgaarb: loaded Nov 1 00:38:13.086003 kernel: pps_core: LinuxPPS API ver. 1 registered Nov 1 00:38:13.086022 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Nov 1 00:38:13.086037 kernel: PTP clock support registered Nov 1 00:38:13.086051 kernel: PCI: Using ACPI for IRQ routing Nov 1 00:38:13.086066 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 1 00:38:13.086081 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Nov 1 00:38:13.086095 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff] Nov 1 00:38:13.086106 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Nov 1 00:38:13.086119 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Nov 1 00:38:13.086131 kernel: clocksource: Switched to clocksource kvm-clock Nov 1 00:38:13.086149 kernel: VFS: Disk quotas dquot_6.6.0 Nov 1 00:38:13.086164 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 1 00:38:13.086179 kernel: pnp: PnP ACPI init Nov 1 00:38:13.086193 kernel: pnp: PnP ACPI: found 4 devices Nov 1 00:38:13.086208 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 1 00:38:13.086222 kernel: NET: Registered PF_INET protocol family Nov 1 00:38:13.086237 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 1 00:38:13.086252 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Nov 1 00:38:13.086269 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 1 00:38:13.086291 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 1 00:38:13.086305 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) Nov 1 00:38:13.086319 kernel: TCP: Hash tables configured (established 16384 bind 16384) Nov 1 00:38:13.086334 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 1 00:38:13.086349 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 1 00:38:13.086364 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 1 00:38:13.086378 kernel: NET: Registered 
PF_XDP protocol family Nov 1 00:38:13.086522 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 1 00:38:13.086654 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 1 00:38:13.093898 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 1 00:38:13.094048 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Nov 1 00:38:13.094171 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Nov 1 00:38:13.094331 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Nov 1 00:38:13.094477 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Nov 1 00:38:13.094623 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds Nov 1 00:38:13.094650 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Nov 1 00:38:13.094824 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x740 took 39392 usecs Nov 1 00:38:13.094843 kernel: PCI: CLS 0 bytes, default 64 Nov 1 00:38:13.094858 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Nov 1 00:38:13.094873 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x3985b48a746, max_idle_ns: 881590510383 ns Nov 1 00:38:13.094888 kernel: Initialise system trusted keyrings Nov 1 00:38:13.094903 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Nov 1 00:38:13.094917 kernel: Key type asymmetric registered Nov 1 00:38:13.094932 kernel: Asymmetric key parser 'x509' registered Nov 1 00:38:13.094946 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Nov 1 00:38:13.094965 kernel: io scheduler mq-deadline registered Nov 1 00:38:13.094980 kernel: io scheduler kyber registered Nov 1 00:38:13.094994 kernel: io scheduler bfq registered Nov 1 00:38:13.095008 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 1 00:38:13.095023 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Nov 1 00:38:13.095044 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Nov 1 00:38:13.095059 kernel: ACPI: 
\_SB_.LNKA: Enabled at IRQ 10 Nov 1 00:38:13.095074 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 1 00:38:13.095089 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 1 00:38:13.095106 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Nov 1 00:38:13.095121 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 1 00:38:13.095135 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 1 00:38:13.095149 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Nov 1 00:38:13.095354 kernel: rtc_cmos 00:03: RTC can wake from S4 Nov 1 00:38:13.095491 kernel: rtc_cmos 00:03: registered as rtc0 Nov 1 00:38:13.095624 kernel: rtc_cmos 00:03: setting system clock to 2025-11-01T00:38:12 UTC (1761957492) Nov 1 00:38:13.095781 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Nov 1 00:38:13.095799 kernel: intel_pstate: CPU model not supported Nov 1 00:38:13.095814 kernel: NET: Registered PF_INET6 protocol family Nov 1 00:38:13.095829 kernel: Segment Routing with IPv6 Nov 1 00:38:13.095843 kernel: In-situ OAM (IOAM) with IPv6 Nov 1 00:38:13.095857 kernel: NET: Registered PF_PACKET protocol family Nov 1 00:38:13.095871 kernel: Key type dns_resolver registered Nov 1 00:38:13.095886 kernel: IPI shorthand broadcast: enabled Nov 1 00:38:13.095900 kernel: sched_clock: Marking stable (859805668, 225969860)->(1327757347, -241981819) Nov 1 00:38:13.095915 kernel: registered taskstats version 1 Nov 1 00:38:13.095934 kernel: Loading compiled-in X.509 certificates Nov 1 00:38:13.095948 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: f2055682e6899ad8548fd369019e7b47939b46a0' Nov 1 00:38:13.095964 kernel: Key type .fscrypt registered Nov 1 00:38:13.095978 kernel: Key type fscrypt-provisioning registered Nov 1 00:38:13.095993 kernel: ima: No TPM chip found, activating TPM-bypass! 
Nov 1 00:38:13.096008 kernel: ima: Allocated hash algorithm: sha1 Nov 1 00:38:13.096023 kernel: ima: No architecture policies found Nov 1 00:38:13.096037 kernel: clk: Disabling unused clocks Nov 1 00:38:13.096056 kernel: Freeing unused kernel image (initmem) memory: 47496K Nov 1 00:38:13.096077 kernel: Write protecting the kernel read-only data: 28672k Nov 1 00:38:13.096092 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Nov 1 00:38:13.096107 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K Nov 1 00:38:13.096121 kernel: Run /init as init process Nov 1 00:38:13.096136 kernel: with arguments: Nov 1 00:38:13.096174 kernel: /init Nov 1 00:38:13.096192 kernel: with environment: Nov 1 00:38:13.096206 kernel: HOME=/ Nov 1 00:38:13.096223 kernel: TERM=linux Nov 1 00:38:13.096238 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Nov 1 00:38:13.096258 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Nov 1 00:38:13.096277 systemd[1]: Detected virtualization kvm. Nov 1 00:38:13.096293 systemd[1]: Detected architecture x86-64. Nov 1 00:38:13.096309 systemd[1]: Running in initrd. Nov 1 00:38:13.096324 systemd[1]: No hostname configured, using default hostname. Nov 1 00:38:13.096339 systemd[1]: Hostname set to . Nov 1 00:38:13.096372 systemd[1]: Initializing machine ID from VM UUID. Nov 1 00:38:13.096388 systemd[1]: Queued start job for default target initrd.target. Nov 1 00:38:13.096403 systemd[1]: Started systemd-ask-password-console.path. Nov 1 00:38:13.096419 systemd[1]: Reached target cryptsetup.target. Nov 1 00:38:13.096448 systemd[1]: Reached target paths.target. Nov 1 00:38:13.096463 systemd[1]: Reached target slices.target. 
Nov 1 00:38:13.096478 systemd[1]: Reached target swap.target. Nov 1 00:38:13.096493 systemd[1]: Reached target timers.target. Nov 1 00:38:13.096521 systemd[1]: Listening on iscsid.socket. Nov 1 00:38:13.096536 systemd[1]: Listening on iscsiuio.socket. Nov 1 00:38:13.096556 systemd[1]: Listening on systemd-journald-audit.socket. Nov 1 00:38:13.096572 systemd[1]: Listening on systemd-journald-dev-log.socket. Nov 1 00:38:13.096588 systemd[1]: Listening on systemd-journald.socket. Nov 1 00:38:13.096604 systemd[1]: Listening on systemd-networkd.socket. Nov 1 00:38:13.096620 systemd[1]: Listening on systemd-udevd-control.socket. Nov 1 00:38:13.096636 systemd[1]: Listening on systemd-udevd-kernel.socket. Nov 1 00:38:13.096655 systemd[1]: Reached target sockets.target. Nov 1 00:38:13.096671 systemd[1]: Starting kmod-static-nodes.service... Nov 1 00:38:13.096710 systemd[1]: Finished network-cleanup.service. Nov 1 00:38:13.096726 systemd[1]: Starting systemd-fsck-usr.service... Nov 1 00:38:13.096742 systemd[1]: Starting systemd-journald.service... Nov 1 00:38:13.096768 systemd[1]: Starting systemd-modules-load.service... Nov 1 00:38:13.096784 systemd[1]: Starting systemd-resolved.service... Nov 1 00:38:13.096800 systemd[1]: Starting systemd-vconsole-setup.service... Nov 1 00:38:13.096816 systemd[1]: Finished kmod-static-nodes.service. Nov 1 00:38:13.096832 systemd[1]: Finished systemd-fsck-usr.service. Nov 1 00:38:13.096856 systemd-journald[184]: Journal started Nov 1 00:38:13.096942 systemd-journald[184]: Runtime Journal (/run/log/journal/bb1db076211e4ef7a60ab29213085a57) is 4.9M, max 39.5M, 34.5M free. Nov 1 00:38:13.063211 systemd-modules-load[185]: Inserted module 'overlay' Nov 1 00:38:13.179255 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 1 00:38:13.179294 kernel: Bridge firewalling registered Nov 1 00:38:13.179313 systemd[1]: Started systemd-journald.service. 
Nov 1 00:38:13.179348 kernel: audit: type=1130 audit(1761957493.166:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:13.179368 kernel: SCSI subsystem initialized
Nov 1 00:38:13.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:13.097558 systemd-resolved[186]: Positive Trust Anchors:
Nov 1 00:38:13.097582 systemd-resolved[186]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 1 00:38:13.097636 systemd-resolved[186]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Nov 1 00:38:13.102660 systemd-resolved[186]: Defaulting to hostname 'linux'.
Nov 1 00:38:13.142640 systemd-modules-load[185]: Inserted module 'br_netfilter'
Nov 1 00:38:13.210426 kernel: audit: type=1130 audit(1761957493.187:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:13.210465 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 1 00:38:13.210486 kernel: device-mapper: uevent: version 1.0.3
Nov 1 00:38:13.210505 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Nov 1 00:38:13.210524 kernel: audit: type=1130 audit(1761957493.203:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:13.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:13.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:13.188055 systemd[1]: Started systemd-resolved.service.
Nov 1 00:38:13.219856 kernel: audit: type=1130 audit(1761957493.210:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:13.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:13.205090 systemd[1]: Finished systemd-vconsole-setup.service.
Nov 1 00:38:13.211280 systemd[1]: Reached target nss-lookup.target.
Nov 1 00:38:13.211550 systemd-modules-load[185]: Inserted module 'dm_multipath'
Nov 1 00:38:13.221671 systemd[1]: Starting dracut-cmdline-ask.service...
Nov 1 00:38:13.223390 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Nov 1 00:38:13.226467 systemd[1]: Finished systemd-modules-load.service.
Nov 1 00:38:13.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:13.238755 kernel: audit: type=1130 audit(1761957493.230:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:13.240154 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Nov 1 00:38:13.254178 kernel: audit: type=1130 audit(1761957493.241:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:13.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:13.243363 systemd[1]: Starting systemd-sysctl.service...
Nov 1 00:38:13.261211 systemd[1]: Finished systemd-sysctl.service.
Nov 1 00:38:13.279879 kernel: audit: type=1130 audit(1761957493.261:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:13.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:13.279257 systemd[1]: Finished dracut-cmdline-ask.service.
Nov 1 00:38:13.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:13.281213 systemd[1]: Starting dracut-cmdline.service...
Nov 1 00:38:13.289538 kernel: audit: type=1130 audit(1761957493.279:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:13.295636 dracut-cmdline[207]: dracut-dracut-053
Nov 1 00:38:13.299993 dracut-cmdline[207]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=c4c72a4f851a6da01cbc7150799371516ef8311ea786098908d8eb164df01ee2
Nov 1 00:38:13.396748 kernel: Loading iSCSI transport class v2.0-870.
Nov 1 00:38:13.424753 kernel: iscsi: registered transport (tcp)
Nov 1 00:38:13.459162 kernel: iscsi: registered transport (qla4xxx)
Nov 1 00:38:13.459256 kernel: QLogic iSCSI HBA Driver
Nov 1 00:38:13.514999 systemd[1]: Finished dracut-cmdline.service.
Nov 1 00:38:13.523759 kernel: audit: type=1130 audit(1761957493.515:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:13.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:13.518064 systemd[1]: Starting dracut-pre-udev.service...
Nov 1 00:38:13.592788 kernel: raid6: avx2x4 gen() 18186 MB/s
Nov 1 00:38:13.610772 kernel: raid6: avx2x4 xor() 7340 MB/s
Nov 1 00:38:13.628777 kernel: raid6: avx2x2 gen() 17087 MB/s
Nov 1 00:38:13.646765 kernel: raid6: avx2x2 xor() 10215 MB/s
Nov 1 00:38:13.664779 kernel: raid6: avx2x1 gen() 13816 MB/s
Nov 1 00:38:13.682781 kernel: raid6: avx2x1 xor() 8260 MB/s
Nov 1 00:38:13.700783 kernel: raid6: sse2x4 gen() 8299 MB/s
Nov 1 00:38:13.718764 kernel: raid6: sse2x4 xor() 4195 MB/s
Nov 1 00:38:13.736772 kernel: raid6: sse2x2 gen() 7935 MB/s
Nov 1 00:38:13.754775 kernel: raid6: sse2x2 xor() 5166 MB/s
Nov 1 00:38:13.772770 kernel: raid6: sse2x1 gen() 5885 MB/s
Nov 1 00:38:13.792502 kernel: raid6: sse2x1 xor() 4256 MB/s
Nov 1 00:38:13.792583 kernel: raid6: using algorithm avx2x4 gen() 18186 MB/s
Nov 1 00:38:13.792595 kernel: raid6: .... xor() 7340 MB/s, rmw enabled
Nov 1 00:38:13.794338 kernel: raid6: using avx2x2 recovery algorithm
Nov 1 00:38:13.817757 kernel: xor: automatically using best checksumming function avx
Nov 1 00:38:13.978741 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Nov 1 00:38:13.991992 systemd[1]: Finished dracut-pre-udev.service.
Nov 1 00:38:13.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:13.992000 audit: BPF prog-id=7 op=LOAD
Nov 1 00:38:13.992000 audit: BPF prog-id=8 op=LOAD
Nov 1 00:38:13.994125 systemd[1]: Starting systemd-udevd.service...
Nov 1 00:38:14.011487 systemd-udevd[384]: Using default interface naming scheme 'v252'.
Nov 1 00:38:14.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:14.017264 systemd[1]: Started systemd-udevd.service.
Nov 1 00:38:14.019392 systemd[1]: Starting dracut-pre-trigger.service...
Nov 1 00:38:14.043134 dracut-pre-trigger[391]: rd.md=0: removing MD RAID activation
Nov 1 00:38:14.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:14.086824 systemd[1]: Finished dracut-pre-trigger.service.
Nov 1 00:38:14.088962 systemd[1]: Starting systemd-udev-trigger.service...
Nov 1 00:38:14.140084 systemd[1]: Finished systemd-udev-trigger.service.
Nov 1 00:38:14.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:14.202667 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Nov 1 00:38:14.312740 kernel: cryptd: max_cpu_qlen set to 1000
Nov 1 00:38:14.312766 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 1 00:38:14.312779 kernel: GPT:9289727 != 125829119
Nov 1 00:38:14.312789 kernel: scsi host0: Virtio SCSI HBA
Nov 1 00:38:14.312926 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 1 00:38:14.312937 kernel: GPT:9289727 != 125829119
Nov 1 00:38:14.312949 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 1 00:38:14.312959 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 1 00:38:14.312973 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 1 00:38:14.312984 kernel: AES CTR mode by8 optimization enabled
Nov 1 00:38:14.315038 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB)
Nov 1 00:38:14.327722 kernel: ACPI: bus type USB registered
Nov 1 00:38:14.327783 kernel: usbcore: registered new interface driver usbfs
Nov 1 00:38:14.327798 kernel: usbcore: registered new interface driver hub
Nov 1 00:38:14.327810 kernel: usbcore: registered new device driver usb
Nov 1 00:38:14.328713 kernel: libata version 3.00 loaded.
Nov 1 00:38:14.333726 kernel: ata_piix 0000:00:01.1: version 2.13
Nov 1 00:38:14.387284 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (443)
Nov 1 00:38:14.387305 kernel: ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
Nov 1 00:38:14.387318 kernel: ehci-pci: EHCI PCI platform driver
Nov 1 00:38:14.387330 kernel: uhci_hcd: USB Universal Host Controller Interface driver
Nov 1 00:38:14.387341 kernel: scsi host1: ata_piix
Nov 1 00:38:14.387504 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Nov 1 00:38:14.387630 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Nov 1 00:38:14.387758 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Nov 1 00:38:14.387859 kernel: uhci_hcd 0000:00:01.2: irq 11, io base 0x0000c180
Nov 1 00:38:14.387958 kernel: scsi host2: ata_piix
Nov 1 00:38:14.388078 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Nov 1 00:38:14.388090 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Nov 1 00:38:14.388101 kernel: hub 1-0:1.0: USB hub found
Nov 1 00:38:14.388261 kernel: hub 1-0:1.0: 2 ports detected
Nov 1 00:38:14.364975 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Nov 1 00:38:14.488749 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Nov 1 00:38:14.489557 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Nov 1 00:38:14.494167 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Nov 1 00:38:14.497912 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Nov 1 00:38:14.499606 systemd[1]: Starting disk-uuid.service...
Nov 1 00:38:14.507253 disk-uuid[504]: Primary Header is updated.
Nov 1 00:38:14.507253 disk-uuid[504]: Secondary Entries is updated.
Nov 1 00:38:14.507253 disk-uuid[504]: Secondary Header is updated.
Nov 1 00:38:14.514720 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 1 00:38:14.520739 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 1 00:38:14.527756 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 1 00:38:15.526729 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 1 00:38:15.527074 disk-uuid[505]: The operation has completed successfully.
Nov 1 00:38:15.576651 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 1 00:38:15.577970 systemd[1]: Finished disk-uuid.service.
Nov 1 00:38:15.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:15.578000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:15.580229 systemd[1]: Starting verity-setup.service...
Nov 1 00:38:15.600733 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Nov 1 00:38:15.649431 systemd[1]: Found device dev-mapper-usr.device.
Nov 1 00:38:15.651820 systemd[1]: Mounting sysusr-usr.mount...
Nov 1 00:38:15.653652 systemd[1]: Finished verity-setup.service.
Nov 1 00:38:15.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:15.755724 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Nov 1 00:38:15.756585 systemd[1]: Mounted sysusr-usr.mount.
Nov 1 00:38:15.758213 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Nov 1 00:38:15.760263 systemd[1]: Starting ignition-setup.service...
Nov 1 00:38:15.762670 systemd[1]: Starting parse-ip-for-networkd.service...
Nov 1 00:38:15.779361 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 1 00:38:15.779456 kernel: BTRFS info (device vda6): using free space tree
Nov 1 00:38:15.779470 kernel: BTRFS info (device vda6): has skinny extents
Nov 1 00:38:15.794501 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 1 00:38:15.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:15.803363 systemd[1]: Finished ignition-setup.service.
Nov 1 00:38:15.805308 systemd[1]: Starting ignition-fetch-offline.service...
Nov 1 00:38:15.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:15.903000 audit: BPF prog-id=9 op=LOAD
Nov 1 00:38:15.902440 systemd[1]: Finished parse-ip-for-networkd.service.
Nov 1 00:38:15.906101 systemd[1]: Starting systemd-networkd.service...
Nov 1 00:38:15.934832 systemd-networkd[688]: lo: Link UP
Nov 1 00:38:15.934840 systemd-networkd[688]: lo: Gained carrier
Nov 1 00:38:15.935526 systemd-networkd[688]: Enumeration completed
Nov 1 00:38:15.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:15.935970 systemd-networkd[688]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 1 00:38:15.937247 systemd-networkd[688]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Nov 1 00:38:15.938359 systemd[1]: Started systemd-networkd.service.
Nov 1 00:38:15.938612 systemd-networkd[688]: eth1: Link UP
Nov 1 00:38:15.938619 systemd-networkd[688]: eth1: Gained carrier
Nov 1 00:38:15.940004 systemd[1]: Reached target network.target.
Nov 1 00:38:15.942038 systemd[1]: Starting iscsiuio.service...
Nov 1 00:38:15.944465 systemd-networkd[688]: eth0: Link UP
Nov 1 00:38:15.944470 systemd-networkd[688]: eth0: Gained carrier
Nov 1 00:38:15.967795 ignition[622]: Ignition 2.14.0
Nov 1 00:38:15.967905 ignition[622]: Stage: fetch-offline
Nov 1 00:38:15.968021 ignition[622]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Nov 1 00:38:15.968058 ignition[622]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
Nov 1 00:38:15.972979 ignition[622]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 1 00:38:15.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:15.973838 systemd-networkd[688]: eth1: DHCPv4 address 10.124.0.29/20 acquired from 169.254.169.253
Nov 1 00:38:15.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:15.973186 ignition[622]: parsed url from cmdline: ""
Nov 1 00:38:15.975117 systemd[1]: Finished ignition-fetch-offline.service.
Nov 1 00:38:15.973193 ignition[622]: no config URL provided
Nov 1 00:38:15.977473 systemd[1]: Starting ignition-fetch.service...
Nov 1 00:38:15.973202 ignition[622]: reading system config file "/usr/lib/ignition/user.ign"
Nov 1 00:38:15.978290 systemd[1]: Started iscsiuio.service.
Nov 1 00:38:16.002310 iscsid[698]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Nov 1 00:38:16.002310 iscsid[698]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Nov 1 00:38:16.002310 iscsid[698]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Nov 1 00:38:16.002310 iscsid[698]: If using hardware iscsi like qla4xxx this message can be ignored.
Nov 1 00:38:16.002310 iscsid[698]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Nov 1 00:38:16.002310 iscsid[698]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Nov 1 00:38:16.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:15.973216 ignition[622]: no config at "/usr/lib/ignition/user.ign"
Nov 1 00:38:15.985543 systemd-networkd[688]: eth0: DHCPv4 address 146.190.139.75/20, gateway 146.190.128.1 acquired from 169.254.169.253
Nov 1 00:38:15.973224 ignition[622]: failed to fetch config: resource requires networking
Nov 1 00:38:15.995687 systemd[1]: Starting iscsid.service...
Nov 1 00:38:15.973560 ignition[622]: Ignition finished successfully
Nov 1 00:38:16.006328 systemd[1]: Started iscsid.service.
Nov 1 00:38:15.993846 ignition[692]: Ignition 2.14.0
Nov 1 00:38:16.008764 systemd[1]: Starting dracut-initqueue.service...
Nov 1 00:38:15.993854 ignition[692]: Stage: fetch
Nov 1 00:38:15.993979 ignition[692]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Nov 1 00:38:15.993997 ignition[692]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
Nov 1 00:38:15.995886 ignition[692]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 1 00:38:15.996000 ignition[692]: parsed url from cmdline: ""
Nov 1 00:38:16.033048 systemd[1]: Finished dracut-initqueue.service.
Nov 1 00:38:16.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:15.996004 ignition[692]: no config URL provided
Nov 1 00:38:16.034013 systemd[1]: Reached target remote-fs-pre.target.
Nov 1 00:38:15.996010 ignition[692]: reading system config file "/usr/lib/ignition/user.ign"
Nov 1 00:38:16.035255 systemd[1]: Reached target remote-cryptsetup.target.
Nov 1 00:38:15.996018 ignition[692]: no config at "/usr/lib/ignition/user.ign"
Nov 1 00:38:16.038662 systemd[1]: Reached target remote-fs.target.
Nov 1 00:38:15.996052 ignition[692]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Nov 1 00:38:16.043655 systemd[1]: Starting dracut-pre-mount.service...
Nov 1 00:38:16.036465 ignition[692]: GET result: OK
Nov 1 00:38:16.036598 ignition[692]: parsing config with SHA512: 5018a596851ec9d4513107913ab6b6cf38311e39f19914255bd92c07ca1f3ed936794f5cd9a269d07bdbbdb5454253516af88dbd7e491c17344563668741c6fa
Nov 1 00:38:16.055384 unknown[692]: fetched base config from "system"
Nov 1 00:38:16.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:16.056154 ignition[692]: fetch: fetch complete
Nov 1 00:38:16.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:16.055396 unknown[692]: fetched base config from "system"
Nov 1 00:38:16.056161 ignition[692]: fetch: fetch passed
Nov 1 00:38:16.055402 unknown[692]: fetched user config from "digitalocean"
Nov 1 00:38:16.056219 ignition[692]: Ignition finished successfully
Nov 1 00:38:16.057788 systemd[1]: Finished dracut-pre-mount.service.
Nov 1 00:38:16.058891 systemd[1]: Finished ignition-fetch.service.
Nov 1 00:38:16.061475 systemd[1]: Starting ignition-kargs.service...
Nov 1 00:38:16.076344 ignition[713]: Ignition 2.14.0
Nov 1 00:38:16.076377 ignition[713]: Stage: kargs
Nov 1 00:38:16.076570 ignition[713]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Nov 1 00:38:16.076597 ignition[713]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
Nov 1 00:38:16.078686 ignition[713]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 1 00:38:16.080670 ignition[713]: kargs: kargs passed
Nov 1 00:38:16.080761 ignition[713]: Ignition finished successfully
Nov 1 00:38:16.081684 systemd[1]: Finished ignition-kargs.service.
Nov 1 00:38:16.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:16.084080 systemd[1]: Starting ignition-disks.service...
Nov 1 00:38:16.096024 ignition[718]: Ignition 2.14.0
Nov 1 00:38:16.096037 ignition[718]: Stage: disks
Nov 1 00:38:16.096167 ignition[718]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Nov 1 00:38:16.096193 ignition[718]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
Nov 1 00:38:16.098076 ignition[718]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 1 00:38:16.099741 ignition[718]: disks: disks passed
Nov 1 00:38:16.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:16.100650 systemd[1]: Finished ignition-disks.service.
Nov 1 00:38:16.099802 ignition[718]: Ignition finished successfully
Nov 1 00:38:16.102800 systemd[1]: Reached target initrd-root-device.target.
Nov 1 00:38:16.103577 systemd[1]: Reached target local-fs-pre.target.
Nov 1 00:38:16.105007 systemd[1]: Reached target local-fs.target.
Nov 1 00:38:16.106443 systemd[1]: Reached target sysinit.target.
Nov 1 00:38:16.107822 systemd[1]: Reached target basic.target.
Nov 1 00:38:16.110165 systemd[1]: Starting systemd-fsck-root.service...
Nov 1 00:38:16.126585 systemd-fsck[726]: ROOT: clean, 637/553520 files, 56032/553472 blocks
Nov 1 00:38:16.130412 systemd[1]: Finished systemd-fsck-root.service.
Nov 1 00:38:16.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:16.132551 systemd[1]: Mounting sysroot.mount...
Nov 1 00:38:16.146729 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Nov 1 00:38:16.147887 systemd[1]: Mounted sysroot.mount.
Nov 1 00:38:16.148777 systemd[1]: Reached target initrd-root-fs.target.
Nov 1 00:38:16.151545 systemd[1]: Mounting sysroot-usr.mount...
Nov 1 00:38:16.153226 systemd[1]: Starting flatcar-digitalocean-network.service...
Nov 1 00:38:16.155498 systemd[1]: Starting flatcar-metadata-hostname.service...
Nov 1 00:38:16.159485 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 1 00:38:16.159525 systemd[1]: Reached target ignition-diskful.target.
Nov 1 00:38:16.162087 systemd[1]: Mounted sysroot-usr.mount.
Nov 1 00:38:16.165535 systemd[1]: Starting initrd-setup-root.service...
Nov 1 00:38:16.177495 initrd-setup-root[738]: cut: /sysroot/etc/passwd: No such file or directory
Nov 1 00:38:16.192237 initrd-setup-root[746]: cut: /sysroot/etc/group: No such file or directory
Nov 1 00:38:16.206387 initrd-setup-root[756]: cut: /sysroot/etc/shadow: No such file or directory
Nov 1 00:38:16.220661 initrd-setup-root[766]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 1 00:38:16.293742 coreos-metadata[732]: Nov 01 00:38:16.293 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Nov 1 00:38:16.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:16.298429 systemd[1]: Finished initrd-setup-root.service.
Nov 1 00:38:16.300244 systemd[1]: Starting ignition-mount.service...
Nov 1 00:38:16.303915 systemd[1]: Starting sysroot-boot.service...
Nov 1 00:38:16.309496 coreos-metadata[732]: Nov 01 00:38:16.309 INFO Fetch successful
Nov 1 00:38:16.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:16.321000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:16.321186 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
Nov 1 00:38:16.321290 systemd[1]: Finished flatcar-digitalocean-network.service.
Nov 1 00:38:16.324980 bash[784]: umount: /sysroot/usr/share/oem: not mounted.
Nov 1 00:38:16.340819 coreos-metadata[733]: Nov 01 00:38:16.340 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Nov 1 00:38:16.344122 ignition[785]: INFO : Ignition 2.14.0
Nov 1 00:38:16.345287 ignition[785]: INFO : Stage: mount
Nov 1 00:38:16.346823 ignition[785]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Nov 1 00:38:16.348023 ignition[785]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
Nov 1 00:38:16.354217 ignition[785]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 1 00:38:16.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:16.352144 systemd[1]: Finished sysroot-boot.service.
Nov 1 00:38:16.358362 coreos-metadata[733]: Nov 01 00:38:16.354 INFO Fetch successful
Nov 1 00:38:16.359386 ignition[785]: INFO : mount: mount passed
Nov 1 00:38:16.359386 ignition[785]: INFO : Ignition finished successfully
Nov 1 00:38:16.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:16.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:16.361969 systemd[1]: Finished ignition-mount.service.
Nov 1 00:38:16.364651 coreos-metadata[733]: Nov 01 00:38:16.361 INFO wrote hostname ci-3510.3.8-n-368ce9a156 to /sysroot/etc/hostname
Nov 1 00:38:16.362962 systemd[1]: Finished flatcar-metadata-hostname.service.
Nov 1 00:38:16.668245 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Nov 1 00:38:16.678746 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (792)
Nov 1 00:38:16.690746 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 1 00:38:16.690830 kernel: BTRFS info (device vda6): using free space tree
Nov 1 00:38:16.690843 kernel: BTRFS info (device vda6): has skinny extents
Nov 1 00:38:16.697899 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Nov 1 00:38:16.699571 systemd[1]: Starting ignition-files.service...
Nov 1 00:38:16.720408 ignition[812]: INFO : Ignition 2.14.0
Nov 1 00:38:16.720408 ignition[812]: INFO : Stage: files
Nov 1 00:38:16.722745 ignition[812]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Nov 1 00:38:16.722745 ignition[812]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
Nov 1 00:38:16.722745 ignition[812]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 1 00:38:16.728133 ignition[812]: DEBUG : files: compiled without relabeling support, skipping
Nov 1 00:38:16.728133 ignition[812]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 1 00:38:16.728133 ignition[812]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 1 00:38:16.733767 ignition[812]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 1 00:38:16.733767 ignition[812]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 1 00:38:16.733767 ignition[812]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 1 00:38:16.732985 unknown[812]: wrote ssh authorized keys file for user: core
Nov 1 00:38:16.739813 ignition[812]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Nov 1 00:38:16.739813 ignition[812]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Nov 1 00:38:16.782376 ignition[812]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 1 00:38:16.849503 ignition[812]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Nov 1 00:38:16.851343 ignition[812]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Nov 1 00:38:16.851343 ignition[812]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Nov 1 00:38:17.048052 systemd-networkd[688]: eth0: Gained IPv6LL
Nov 1 00:38:17.064138 ignition[812]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Nov 1 00:38:17.177296 systemd-networkd[688]: eth1: Gained IPv6LL
Nov 1 00:38:17.183151 ignition[812]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Nov 1 00:38:17.184968 ignition[812]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Nov 1 00:38:17.186530 ignition[812]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Nov 1 00:38:17.186530 ignition[812]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 1 00:38:17.186530 ignition[812]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 1 00:38:17.186530 ignition[812]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 1 00:38:17.192619 ignition[812]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 1 00:38:17.192619 ignition[812]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 1 00:38:17.192619 ignition[812]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 1 00:38:17.192619 ignition[812]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 1 00:38:17.192619 ignition[812]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 1 00:38:17.192619 ignition[812]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 1 00:38:17.192619 ignition[812]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 1 00:38:17.192619 ignition[812]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 1 00:38:17.192619 ignition[812]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Nov 1 00:38:17.534919 ignition[812]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Nov 1 00:38:17.878594 ignition[812]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 1 00:38:17.880471 ignition[812]: INFO : files: op(c): [started] processing unit "coreos-metadata-sshkeys@.service"
Nov 1 00:38:17.881517 ignition[812]: INFO : files: op(c): [finished] processing unit "coreos-metadata-sshkeys@.service"
Nov 1 00:38:17.882491 ignition[812]: INFO : files: op(d): [started] processing unit "prepare-helm.service"
Nov 1 00:38:17.883868 ignition[812]: INFO : files: op(d): op(e): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 1 00:38:17.886759 ignition[812]: INFO : files: op(d): op(e): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 1 00:38:17.886759 ignition[812]: INFO : files: op(d): [finished] processing unit "prepare-helm.service"
Nov 1 00:38:17.886759 ignition[812]: INFO : files: op(f): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Nov 1 00:38:17.886759 ignition[812]: INFO : files: op(f): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Nov 1 00:38:17.886759 ignition[812]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Nov 1 00:38:17.886759 ignition[812]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Nov 1 00:38:17.906830 kernel: kauditd_printk_skb: 28 callbacks suppressed
Nov 1 00:38:17.906862 kernel: audit: type=1130 audit(1761957497.893:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:17.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:17.893617 systemd[1]: Finished ignition-files.service.
Nov 1 00:38:17.908004 ignition[812]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 1 00:38:17.908004 ignition[812]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 1 00:38:17.908004 ignition[812]: INFO : files: files passed
Nov 1 00:38:17.908004 ignition[812]: INFO : Ignition finished successfully
Nov 1 00:38:17.931513 kernel: audit: type=1130 audit(1761957497.909:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:17.931547 kernel: audit: type=1131 audit(1761957497.909:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:17.931560 kernel: audit: type=1130 audit(1761957497.924:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:17.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:17.909000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:17.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:17.895434 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Nov 1 00:38:17.903836 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Nov 1 00:38:17.935248 initrd-setup-root-after-ignition[837]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 1 00:38:17.904992 systemd[1]: Starting ignition-quench.service...
Nov 1 00:38:17.908987 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 1 00:38:17.909102 systemd[1]: Finished ignition-quench.service.
Nov 1 00:38:17.923133 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Nov 1 00:38:17.925015 systemd[1]: Reached target ignition-complete.target.
Nov 1 00:38:17.933201 systemd[1]: Starting initrd-parse-etc.service...
Nov 1 00:38:17.951152 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 1 00:38:17.951907 systemd[1]: Finished initrd-parse-etc.service.
Nov 1 00:38:17.965052 kernel: audit: type=1130 audit(1761957497.952:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:17.965089 kernel: audit: type=1131 audit(1761957497.952:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:17.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:17.952000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:17.952919 systemd[1]: Reached target initrd-fs.target.
Nov 1 00:38:17.965805 systemd[1]: Reached target initrd.target.
Nov 1 00:38:17.967106 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Nov 1 00:38:17.968087 systemd[1]: Starting dracut-pre-pivot.service...
Nov 1 00:38:17.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:17.992815 kernel: audit: type=1130 audit(1761957497.984:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:17.984796 systemd[1]: Finished dracut-pre-pivot.service.
Nov 1 00:38:17.986387 systemd[1]: Starting initrd-cleanup.service...
Nov 1 00:38:18.000864 systemd[1]: Stopped target nss-lookup.target.
Nov 1 00:38:18.001818 systemd[1]: Stopped target remote-cryptsetup.target.
Nov 1 00:38:18.003456 systemd[1]: Stopped target timers.target.
Nov 1 00:38:18.005028 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 1 00:38:18.021206 kernel: audit: type=1131 audit(1761957498.012:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:18.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:18.005192 systemd[1]: Stopped dracut-pre-pivot.service.
Nov 1 00:38:18.013327 systemd[1]: Stopped target initrd.target.
Nov 1 00:38:18.021959 systemd[1]: Stopped target basic.target.
Nov 1 00:38:18.023450 systemd[1]: Stopped target ignition-complete.target.
Nov 1 00:38:18.025094 systemd[1]: Stopped target ignition-diskful.target.
Nov 1 00:38:18.026549 systemd[1]: Stopped target initrd-root-device.target.
Nov 1 00:38:18.028179 systemd[1]: Stopped target remote-fs.target.
Nov 1 00:38:18.029777 systemd[1]: Stopped target remote-fs-pre.target.
Nov 1 00:38:18.031438 systemd[1]: Stopped target sysinit.target.
Nov 1 00:38:18.032905 systemd[1]: Stopped target local-fs.target.
Nov 1 00:38:18.034401 systemd[1]: Stopped target local-fs-pre.target.
Nov 1 00:38:18.035903 systemd[1]: Stopped target swap.target.
Nov 1 00:38:18.046538 kernel: audit: type=1131 audit(1761957498.038:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:18.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:18.037283 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 1 00:38:18.037455 systemd[1]: Stopped dracut-pre-mount.service.
Nov 1 00:38:18.056395 kernel: audit: type=1131 audit(1761957498.048:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:18.048000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:18.038978 systemd[1]: Stopped target cryptsetup.target.
Nov 1 00:38:18.056000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:18.047347 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 1 00:38:18.058000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:18.047525 systemd[1]: Stopped dracut-initqueue.service.
Nov 1 00:38:18.059000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:18.049024 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 1 00:38:18.049249 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Nov 1 00:38:18.057328 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 1 00:38:18.057519 systemd[1]: Stopped ignition-files.service.
Nov 1 00:38:18.058850 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Nov 1 00:38:18.077584 ignition[850]: INFO : Ignition 2.14.0
Nov 1 00:38:18.077584 ignition[850]: INFO : Stage: umount
Nov 1 00:38:18.077584 ignition[850]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Nov 1 00:38:18.077584 ignition[850]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c
Nov 1 00:38:18.077584 ignition[850]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 1 00:38:18.077000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:18.087000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:18.088000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:18.091000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:18.058999 systemd[1]: Stopped flatcar-metadata-hostname.service.
Nov 1 00:38:18.095173 ignition[850]: INFO : umount: umount passed
Nov 1 00:38:18.095173 ignition[850]: INFO : Ignition finished successfully
Nov 1 00:38:18.096000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:18.061730 systemd[1]: Stopping ignition-mount.service...
Nov 1 00:38:18.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:18.100000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:18.102000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:18.103000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:18.104000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:18.071467 systemd[1]: Stopping iscsiuio.service...
Nov 1 00:38:18.106000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:18.076281 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 1 00:38:18.076506 systemd[1]: Stopped kmod-static-nodes.service.
Nov 1 00:38:18.124000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:18.079185 systemd[1]: Stopping sysroot-boot.service...
Nov 1 00:38:18.085521 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 1 00:38:18.085734 systemd[1]: Stopped systemd-udev-trigger.service.
Nov 1 00:38:18.088169 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 1 00:38:18.088315 systemd[1]: Stopped dracut-pre-trigger.service.
Nov 1 00:38:18.149000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:18.091007 systemd[1]: iscsiuio.service: Deactivated successfully.
Nov 1 00:38:18.091113 systemd[1]: Stopped iscsiuio.service.
Nov 1 00:38:18.092444 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 1 00:38:18.181000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:18.092532 systemd[1]: Stopped ignition-mount.service.
Nov 1 00:38:18.182000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:18.098939 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 1 00:38:18.184000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:18.099050 systemd[1]: Finished initrd-cleanup.service.
Nov 1 00:38:18.102112 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 1 00:38:18.102211 systemd[1]: Stopped ignition-disks.service.
Nov 1 00:38:18.103096 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 1 00:38:18.103163 systemd[1]: Stopped ignition-kargs.service.
Nov 1 00:38:18.104068 systemd[1]: ignition-fetch.service: Deactivated successfully.
Nov 1 00:38:18.104129 systemd[1]: Stopped ignition-fetch.service.
Nov 1 00:38:18.105079 systemd[1]: Stopped target network.target.
Nov 1 00:38:18.221000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:18.105929 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 1 00:38:18.223000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:18.106008 systemd[1]: Stopped ignition-fetch-offline.service.
Nov 1 00:38:18.225000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:18.107579 systemd[1]: Stopped target paths.target.
Nov 1 00:38:18.109463 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 1 00:38:18.227000 audit: BPF prog-id=6 op=UNLOAD
Nov 1 00:38:18.118831 systemd[1]: Stopped systemd-ask-password-console.path.
Nov 1 00:38:18.120061 systemd[1]: Stopped target slices.target.
Nov 1 00:38:18.120983 systemd[1]: Stopped target sockets.target.
Nov 1 00:38:18.121890 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 1 00:38:18.121935 systemd[1]: Closed iscsid.socket.
Nov 1 00:38:18.235000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:18.122618 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 1 00:38:18.236000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:18.122673 systemd[1]: Closed iscsiuio.socket.
Nov 1 00:38:18.238000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:18.123378 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 1 00:38:18.240000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:18.123438 systemd[1]: Stopped ignition-setup.service.
Nov 1 00:38:18.125321 systemd[1]: Stopping systemd-networkd.service...
Nov 1 00:38:18.133931 systemd-networkd[688]: eth1: DHCPv6 lease lost
Nov 1 00:38:18.135516 systemd[1]: Stopping systemd-resolved.service...
Nov 1 00:38:18.145391 systemd-networkd[688]: eth0: DHCPv6 lease lost
Nov 1 00:38:18.145801 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 1 00:38:18.148129 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 1 00:38:18.251000 audit: BPF prog-id=9 op=UNLOAD
Nov 1 00:38:18.148297 systemd[1]: Stopped systemd-networkd.service.
Nov 1 00:38:18.151085 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 1 00:38:18.151151 systemd[1]: Closed systemd-networkd.socket.
Nov 1 00:38:18.252000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:18.255000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:18.165731 systemd[1]: Stopping network-cleanup.service...
Nov 1 00:38:18.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:18.257000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:18.170061 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 1 00:38:18.170169 systemd[1]: Stopped parse-ip-for-networkd.service.
Nov 1 00:38:18.181767 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 1 00:38:18.181862 systemd[1]: Stopped systemd-sysctl.service.
Nov 1 00:38:18.183777 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 1 00:38:18.183847 systemd[1]: Stopped systemd-modules-load.service.
Nov 1 00:38:18.185187 systemd[1]: Stopping systemd-udevd.service...
Nov 1 00:38:18.202138 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 1 00:38:18.203028 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 1 00:38:18.205808 systemd[1]: Stopped systemd-resolved.service.
Nov 1 00:38:18.222481 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 1 00:38:18.222673 systemd[1]: Stopped systemd-udevd.service.
Nov 1 00:38:18.225209 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 1 00:38:18.225349 systemd[1]: Stopped sysroot-boot.service.
Nov 1 00:38:18.228120 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 1 00:38:18.228189 systemd[1]: Closed systemd-udevd-control.socket.
Nov 1 00:38:18.232346 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 1 00:38:18.232425 systemd[1]: Closed systemd-udevd-kernel.socket.
Nov 1 00:38:18.234071 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 1 00:38:18.234154 systemd[1]: Stopped dracut-pre-udev.service.
Nov 1 00:38:18.235838 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 1 00:38:18.235908 systemd[1]: Stopped dracut-cmdline.service.
Nov 1 00:38:18.237370 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 1 00:38:18.237436 systemd[1]: Stopped dracut-cmdline-ask.service.
Nov 1 00:38:18.239156 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 1 00:38:18.301225 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Nov 1 00:38:18.301301 iscsid[698]: iscsid shutting down.
Nov 1 00:38:18.239217 systemd[1]: Stopped initrd-setup-root.service.
Nov 1 00:38:18.241950 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Nov 1 00:38:18.243469 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 1 00:38:18.243562 systemd[1]: Stopped systemd-vconsole-setup.service.
Nov 1 00:38:18.254333 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 1 00:38:18.254475 systemd[1]: Stopped network-cleanup.service.
Nov 1 00:38:18.256666 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 1 00:38:18.257045 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Nov 1 00:38:18.258365 systemd[1]: Reached target initrd-switch-root.target.
Nov 1 00:38:18.260985 systemd[1]: Starting initrd-switch-root.service...
Nov 1 00:38:18.278156 systemd[1]: Switching root.
Nov 1 00:38:18.313408 systemd-journald[184]: Journal stopped
Nov 1 00:38:22.135940 kernel: SELinux: Class mctp_socket not defined in policy.
Nov 1 00:38:22.136008 kernel: SELinux: Class anon_inode not defined in policy.
Nov 1 00:38:22.136036 kernel: SELinux: the above unknown classes and permissions will be allowed
Nov 1 00:38:22.136052 kernel: SELinux: policy capability network_peer_controls=1
Nov 1 00:38:22.136064 kernel: SELinux: policy capability open_perms=1
Nov 1 00:38:22.136076 kernel: SELinux: policy capability extended_socket_class=1
Nov 1 00:38:22.136093 kernel: SELinux: policy capability always_check_network=0
Nov 1 00:38:22.136105 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 1 00:38:22.136117 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 1 00:38:22.136129 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 1 00:38:22.136141 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 1 00:38:22.136156 systemd[1]: Successfully loaded SELinux policy in 58.055ms.
Nov 1 00:38:22.136181 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.833ms.
Nov 1 00:38:22.136195 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 1 00:38:22.136209 systemd[1]: Detected virtualization kvm.
Nov 1 00:38:22.136221 systemd[1]: Detected architecture x86-64.
Nov 1 00:38:22.136234 systemd[1]: Detected first boot.
Nov 1 00:38:22.136246 systemd[1]: Hostname set to .
Nov 1 00:38:22.136259 systemd[1]: Initializing machine ID from VM UUID.
Nov 1 00:38:22.136279 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Nov 1 00:38:22.136291 systemd[1]: Populated /etc with preset unit settings.
Nov 1 00:38:22.136304 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Nov 1 00:38:22.136319 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 1 00:38:22.136332 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 1 00:38:22.136346 systemd[1]: iscsid.service: Deactivated successfully.
Nov 1 00:38:22.136374 systemd[1]: Stopped iscsid.service.
Nov 1 00:38:22.136404 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 1 00:38:22.136417 systemd[1]: Stopped initrd-switch-root.service.
Nov 1 00:38:22.136429 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 1 00:38:22.136441 systemd[1]: Created slice system-addon\x2dconfig.slice.
Nov 1 00:38:22.136455 systemd[1]: Created slice system-addon\x2drun.slice.
Nov 1 00:38:22.136467 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Nov 1 00:38:22.136480 systemd[1]: Created slice system-getty.slice.
Nov 1 00:38:22.136493 systemd[1]: Created slice system-modprobe.slice.
Nov 1 00:38:22.136505 systemd[1]: Created slice system-serial\x2dgetty.slice.
Nov 1 00:38:22.136525 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Nov 1 00:38:22.136537 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Nov 1 00:38:22.136550 systemd[1]: Created slice user.slice.
Nov 1 00:38:22.136563 systemd[1]: Started systemd-ask-password-console.path.
Nov 1 00:38:22.136576 systemd[1]: Started systemd-ask-password-wall.path.
Nov 1 00:38:22.136589 systemd[1]: Set up automount boot.automount.
Nov 1 00:38:22.136608 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Nov 1 00:38:22.136620 systemd[1]: Stopped target initrd-switch-root.target.
Nov 1 00:38:22.136632 systemd[1]: Stopped target initrd-fs.target.
Nov 1 00:38:22.136645 systemd[1]: Stopped target initrd-root-fs.target.
Nov 1 00:38:22.136657 systemd[1]: Reached target integritysetup.target.
Nov 1 00:38:22.136669 systemd[1]: Reached target remote-cryptsetup.target.
Nov 1 00:38:22.136682 systemd[1]: Reached target remote-fs.target.
Nov 1 00:38:22.136738 systemd[1]: Reached target slices.target.
Nov 1 00:38:22.136751 systemd[1]: Reached target swap.target.
Nov 1 00:38:22.136764 systemd[1]: Reached target torcx.target.
Nov 1 00:38:22.136783 systemd[1]: Reached target veritysetup.target.
Nov 1 00:38:22.136796 systemd[1]: Listening on systemd-coredump.socket.
Nov 1 00:38:22.136808 systemd[1]: Listening on systemd-initctl.socket.
Nov 1 00:38:22.136820 systemd[1]: Listening on systemd-networkd.socket.
Nov 1 00:38:22.136833 systemd[1]: Listening on systemd-udevd-control.socket.
Nov 1 00:38:22.136845 systemd[1]: Listening on systemd-udevd-kernel.socket.
Nov 1 00:38:22.136858 systemd[1]: Listening on systemd-userdbd.socket.
Nov 1 00:38:22.136870 systemd[1]: Mounting dev-hugepages.mount...
Nov 1 00:38:22.136883 systemd[1]: Mounting dev-mqueue.mount...
Nov 1 00:38:22.136902 systemd[1]: Mounting media.mount...
Nov 1 00:38:22.136915 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:38:22.136928 systemd[1]: Mounting sys-kernel-debug.mount...
Nov 1 00:38:22.136940 systemd[1]: Mounting sys-kernel-tracing.mount...
Nov 1 00:38:22.136952 systemd[1]: Mounting tmp.mount...
Nov 1 00:38:22.136967 systemd[1]: Starting flatcar-tmpfiles.service...
Nov 1 00:38:22.136979 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Nov 1 00:38:22.136991 systemd[1]: Starting kmod-static-nodes.service...
Nov 1 00:38:22.137004 systemd[1]: Starting modprobe@configfs.service...
Nov 1 00:38:22.137022 systemd[1]: Starting modprobe@dm_mod.service...
Nov 1 00:38:22.137034 systemd[1]: Starting modprobe@drm.service...
Nov 1 00:38:22.137046 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:38:22.137059 systemd[1]: Starting modprobe@fuse.service... Nov 1 00:38:22.137071 systemd[1]: Starting modprobe@loop.service... Nov 1 00:38:22.137084 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 1 00:38:22.137097 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 1 00:38:22.137109 systemd[1]: Stopped systemd-fsck-root.service. Nov 1 00:38:22.137121 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 1 00:38:22.137139 systemd[1]: Stopped systemd-fsck-usr.service. Nov 1 00:38:22.137151 systemd[1]: Stopped systemd-journald.service. Nov 1 00:38:22.137164 systemd[1]: Starting systemd-journald.service... Nov 1 00:38:22.137176 kernel: fuse: init (API version 7.34) Nov 1 00:38:22.137188 systemd[1]: Starting systemd-modules-load.service... Nov 1 00:38:22.137200 systemd[1]: Starting systemd-network-generator.service... Nov 1 00:38:22.137212 systemd[1]: Starting systemd-remount-fs.service... Nov 1 00:38:22.137225 kernel: loop: module loaded Nov 1 00:38:22.137240 systemd[1]: Starting systemd-udev-trigger.service... Nov 1 00:38:22.137258 systemd[1]: verity-setup.service: Deactivated successfully. Nov 1 00:38:22.137271 systemd[1]: Stopped verity-setup.service. Nov 1 00:38:22.137283 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:38:22.137296 systemd[1]: Mounted dev-hugepages.mount. Nov 1 00:38:22.137308 systemd[1]: Mounted dev-mqueue.mount. Nov 1 00:38:22.137324 systemd-journald[965]: Journal started Nov 1 00:38:22.137373 systemd-journald[965]: Runtime Journal (/run/log/journal/bb1db076211e4ef7a60ab29213085a57) is 4.9M, max 39.5M, 34.5M free. 
Nov 1 00:38:18.452000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 1 00:38:18.518000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Nov 1 00:38:18.518000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Nov 1 00:38:18.518000 audit: BPF prog-id=10 op=LOAD
Nov 1 00:38:18.518000 audit: BPF prog-id=10 op=UNLOAD
Nov 1 00:38:18.518000 audit: BPF prog-id=11 op=LOAD
Nov 1 00:38:18.518000 audit: BPF prog-id=11 op=UNLOAD
Nov 1 00:38:18.627000 audit[883]: AVC avc: denied { associate } for pid=883 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Nov 1 00:38:18.627000 audit[883]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d892 a1=c0000cedf8 a2=c0000d70c0 a3=32 items=0 ppid=866 pid=883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:38:18.627000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Nov 1 00:38:18.631000 audit[883]: AVC avc: denied { associate } for pid=883 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Nov 1 00:38:18.631000 audit[883]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d969 a2=1ed a3=0 items=2 ppid=866 pid=883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:38:18.631000 audit: CWD cwd="/"
Nov 1 00:38:18.631000 audit: PATH item=0 name=(null) inode=2 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:18.631000 audit: PATH item=1 name=(null) inode=3 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:18.631000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Nov 1 00:38:21.886000 audit: BPF prog-id=12 op=LOAD
Nov 1 00:38:21.886000 audit: BPF prog-id=3 op=UNLOAD
Nov 1 00:38:21.886000 audit: BPF prog-id=13 op=LOAD
Nov 1 00:38:21.886000 audit: BPF prog-id=14 op=LOAD
Nov 1 00:38:21.886000 audit: BPF prog-id=4 op=UNLOAD
Nov 1 00:38:21.886000 audit: BPF prog-id=5 op=UNLOAD
Nov 1 00:38:21.887000 audit: BPF prog-id=15 op=LOAD
Nov 1 00:38:21.887000 audit: BPF prog-id=12 op=UNLOAD
Nov 1 00:38:21.887000 audit: BPF prog-id=16 op=LOAD
Nov 1 00:38:21.887000 audit: BPF prog-id=17 op=LOAD
Nov 1 00:38:21.887000 audit: BPF prog-id=13 op=UNLOAD
Nov 1 00:38:21.887000 audit: BPF prog-id=14 op=UNLOAD
Nov 1 00:38:21.888000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:21.893000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:21.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:21.898000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:21.901000 audit: BPF prog-id=15 op=UNLOAD
Nov 1 00:38:22.064000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:22.071000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:22.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:22.073000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:22.074000 audit: BPF prog-id=18 op=LOAD
Nov 1 00:38:22.074000 audit: BPF prog-id=19 op=LOAD
Nov 1 00:38:22.074000 audit: BPF prog-id=20 op=LOAD
Nov 1 00:38:22.074000 audit: BPF prog-id=16 op=UNLOAD
Nov 1 00:38:22.075000 audit: BPF prog-id=17 op=UNLOAD
Nov 1 00:38:22.121000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:22.133000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Nov 1 00:38:22.133000 audit[965]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffe638cefe0 a2=4000 a3=7ffe638cf07c items=0 ppid=1 pid=965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:38:22.133000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Nov 1 00:38:18.624222 /usr/lib/systemd/system-generators/torcx-generator[883]: time="2025-11-01T00:38:18Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Nov 1 00:38:21.884182 systemd[1]: Queued start job for default target multi-user.target.
Nov 1 00:38:18.625070 /usr/lib/systemd/system-generators/torcx-generator[883]: time="2025-11-01T00:38:18Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Nov 1 00:38:21.884197 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Nov 1 00:38:18.625093 /usr/lib/systemd/system-generators/torcx-generator[883]: time="2025-11-01T00:38:18Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Nov 1 00:38:21.888507 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 1 00:38:18.625131 /usr/lib/systemd/system-generators/torcx-generator[883]: time="2025-11-01T00:38:18Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Nov 1 00:38:18.625142 /usr/lib/systemd/system-generators/torcx-generator[883]: time="2025-11-01T00:38:18Z" level=debug msg="skipped missing lower profile" missing profile=oem
Nov 1 00:38:18.625182 /usr/lib/systemd/system-generators/torcx-generator[883]: time="2025-11-01T00:38:18Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Nov 1 00:38:18.625197 /usr/lib/systemd/system-generators/torcx-generator[883]: time="2025-11-01T00:38:18Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Nov 1 00:38:18.625417 /usr/lib/systemd/system-generators/torcx-generator[883]: time="2025-11-01T00:38:18Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Nov 1 00:38:18.625464 /usr/lib/systemd/system-generators/torcx-generator[883]: time="2025-11-01T00:38:18Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Nov 1 00:38:18.625480 /usr/lib/systemd/system-generators/torcx-generator[883]: time="2025-11-01T00:38:18Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Nov 1 00:38:18.627253 /usr/lib/systemd/system-generators/torcx-generator[883]: time="2025-11-01T00:38:18Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Nov 1 00:38:22.141721 systemd[1]: Started systemd-journald.service.
Nov 1 00:38:18.627293 /usr/lib/systemd/system-generators/torcx-generator[883]: time="2025-11-01T00:38:18Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Nov 1 00:38:18.627316 /usr/lib/systemd/system-generators/torcx-generator[883]: time="2025-11-01T00:38:18Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8
Nov 1 00:38:22.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:18.627332 /usr/lib/systemd/system-generators/torcx-generator[883]: time="2025-11-01T00:38:18Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Nov 1 00:38:22.143060 systemd[1]: Mounted media.mount.
Nov 1 00:38:18.627355 /usr/lib/systemd/system-generators/torcx-generator[883]: time="2025-11-01T00:38:18Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8
Nov 1 00:38:18.627370 /usr/lib/systemd/system-generators/torcx-generator[883]: time="2025-11-01T00:38:18Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Nov 1 00:38:21.491030 /usr/lib/systemd/system-generators/torcx-generator[883]: time="2025-11-01T00:38:21Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Nov 1 00:38:21.491358 /usr/lib/systemd/system-generators/torcx-generator[883]: time="2025-11-01T00:38:21Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Nov 1 00:38:21.491511 /usr/lib/systemd/system-generators/torcx-generator[883]: time="2025-11-01T00:38:21Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Nov 1 00:38:21.491941 /usr/lib/systemd/system-generators/torcx-generator[883]: time="2025-11-01T00:38:21Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Nov 1 00:38:22.143968 systemd[1]: Mounted sys-kernel-debug.mount.
Nov 1 00:38:21.492006 /usr/lib/systemd/system-generators/torcx-generator[883]: time="2025-11-01T00:38:21Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Nov 1 00:38:21.492090 /usr/lib/systemd/system-generators/torcx-generator[883]: time="2025-11-01T00:38:21Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Nov 1 00:38:22.144929 systemd[1]: Mounted sys-kernel-tracing.mount.
Nov 1 00:38:22.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:22.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:22.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:22.148000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:22.145674 systemd[1]: Mounted tmp.mount.
Nov 1 00:38:22.146537 systemd[1]: Finished flatcar-tmpfiles.service.
Nov 1 00:38:22.147456 systemd[1]: Finished kmod-static-nodes.service.
Nov 1 00:38:22.148405 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 1 00:38:22.148588 systemd[1]: Finished modprobe@configfs.service.
Nov 1 00:38:22.149604 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 1 00:38:22.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:22.150000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:22.150557 systemd[1]: Finished modprobe@dm_mod.service.
Nov 1 00:38:22.151734 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 1 00:38:22.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:22.152000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:22.152119 systemd[1]: Finished modprobe@drm.service.
Nov 1 00:38:22.153126 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 1 00:38:22.153508 systemd[1]: Finished modprobe@efi_pstore.service.
Nov 1 00:38:22.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:22.153000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:22.154474 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 1 00:38:22.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:22.155000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:22.155020 systemd[1]: Finished modprobe@fuse.service.
Nov 1 00:38:22.156130 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 1 00:38:22.156401 systemd[1]: Finished modprobe@loop.service.
Nov 1 00:38:22.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:22.156000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:22.157601 systemd[1]: Finished systemd-modules-load.service.
Nov 1 00:38:22.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:22.158672 systemd[1]: Finished systemd-network-generator.service.
Nov 1 00:38:22.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:22.159843 systemd[1]: Finished systemd-remount-fs.service.
Nov 1 00:38:22.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:22.161190 systemd[1]: Reached target network-pre.target.
Nov 1 00:38:22.163360 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Nov 1 00:38:22.169573 systemd[1]: Mounting sys-kernel-config.mount...
Nov 1 00:38:22.173341 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 1 00:38:22.175427 systemd[1]: Starting systemd-hwdb-update.service...
Nov 1 00:38:22.177437 systemd[1]: Starting systemd-journal-flush.service...
Nov 1 00:38:22.179309 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 1 00:38:22.183760 systemd[1]: Starting systemd-random-seed.service...
Nov 1 00:38:22.184928 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Nov 1 00:38:22.186359 systemd[1]: Starting systemd-sysctl.service...
Nov 1 00:38:22.189276 systemd[1]: Starting systemd-sysusers.service...
Nov 1 00:38:22.194814 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Nov 1 00:38:22.196614 systemd[1]: Mounted sys-kernel-config.mount.
Nov 1 00:38:22.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:22.200985 systemd[1]: Finished systemd-random-seed.service.
Nov 1 00:38:22.201993 systemd[1]: Reached target first-boot-complete.target.
Nov 1 00:38:22.206909 systemd-journald[965]: Time spent on flushing to /var/log/journal/bb1db076211e4ef7a60ab29213085a57 is 26.700ms for 1155 entries.
Nov 1 00:38:22.206909 systemd-journald[965]: System Journal (/var/log/journal/bb1db076211e4ef7a60ab29213085a57) is 8.0M, max 195.6M, 187.6M free.
Nov 1 00:38:22.240575 systemd-journald[965]: Received client request to flush runtime journal.
Nov 1 00:38:22.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:22.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:22.228918 systemd[1]: Finished systemd-sysctl.service.
Nov 1 00:38:22.241680 systemd[1]: Finished systemd-journal-flush.service.
Nov 1 00:38:22.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:22.253829 systemd[1]: Finished systemd-sysusers.service.
Nov 1 00:38:22.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:22.264652 systemd[1]: Finished systemd-udev-trigger.service.
Nov 1 00:38:22.266850 systemd[1]: Starting systemd-udev-settle.service...
Nov 1 00:38:22.279355 udevadm[993]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Nov 1 00:38:22.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:22.765999 systemd[1]: Finished systemd-hwdb-update.service.
Nov 1 00:38:22.767000 audit: BPF prog-id=21 op=LOAD
Nov 1 00:38:22.767000 audit: BPF prog-id=22 op=LOAD
Nov 1 00:38:22.767000 audit: BPF prog-id=7 op=UNLOAD
Nov 1 00:38:22.767000 audit: BPF prog-id=8 op=UNLOAD
Nov 1 00:38:22.769091 systemd[1]: Starting systemd-udevd.service...
Nov 1 00:38:22.790964 systemd-udevd[994]: Using default interface naming scheme 'v252'.
Nov 1 00:38:22.818865 systemd[1]: Started systemd-udevd.service.
Nov 1 00:38:22.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:22.820000 audit: BPF prog-id=23 op=LOAD
Nov 1 00:38:22.822494 systemd[1]: Starting systemd-networkd.service...
Nov 1 00:38:22.831000 audit: BPF prog-id=24 op=LOAD
Nov 1 00:38:22.831000 audit: BPF prog-id=25 op=LOAD
Nov 1 00:38:22.831000 audit: BPF prog-id=26 op=LOAD
Nov 1 00:38:22.833323 systemd[1]: Starting systemd-userdbd.service...
Nov 1 00:38:22.883189 systemd[1]: Started systemd-userdbd.service.
Nov 1 00:38:22.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:22.903431 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:38:22.903728 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Nov 1 00:38:22.905651 systemd[1]: Starting modprobe@dm_mod.service...
Nov 1 00:38:22.909620 systemd[1]: Starting modprobe@efi_pstore.service...
Nov 1 00:38:22.913357 systemd[1]: Starting modprobe@loop.service...
Nov 1 00:38:22.914116 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 1 00:38:22.914232 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 1 00:38:22.914358 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:38:22.914947 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 1 00:38:22.915339 systemd[1]: Finished modprobe@dm_mod.service.
Nov 1 00:38:22.924346 kernel: kauditd_printk_skb: 112 callbacks suppressed
Nov 1 00:38:22.924473 kernel: audit: type=1130 audit(1761957502.915:152): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:22.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:22.916372 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 1 00:38:22.916501 systemd[1]: Finished modprobe@efi_pstore.service.
Nov 1 00:38:22.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:22.925778 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 1 00:38:22.925998 systemd[1]: Finished modprobe@loop.service.
Nov 1 00:38:22.934729 kernel: audit: type=1131 audit(1761957502.915:153): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:22.936974 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 1 00:38:22.937032 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Nov 1 00:38:22.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:22.946736 kernel: audit: type=1130 audit(1761957502.924:154): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:22.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:22.957741 kernel: audit: type=1131 audit(1761957502.924:155): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:22.975749 kernel: audit: type=1130 audit(1761957502.933:156): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:22.975953 kernel: audit: type=1131 audit(1761957502.933:157): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:22.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:22.933000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:22.979469 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Nov 1 00:38:23.031856 systemd-networkd[997]: lo: Link UP
Nov 1 00:38:23.031872 systemd-networkd[997]: lo: Gained carrier
Nov 1 00:38:23.032588 systemd-networkd[997]: Enumeration completed
Nov 1 00:38:23.032747 systemd[1]: Started systemd-networkd.service.
Nov 1 00:38:23.041919 kernel: audit: type=1130 audit(1761957503.032:158): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:23.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:23.033764 systemd-networkd[997]: eth1: Configuring with /run/systemd/network/10-32:e0:dd:e8:c8:f6.network.
Nov 1 00:38:23.042959 systemd-networkd[997]: eth0: Configuring with /run/systemd/network/10-da:fe:e3:cb:22:fb.network.
Nov 1 00:38:23.043843 systemd-networkd[997]: eth1: Link UP
Nov 1 00:38:23.043855 systemd-networkd[997]: eth1: Gained carrier
Nov 1 00:38:23.050112 systemd-networkd[997]: eth0: Link UP
Nov 1 00:38:23.050126 systemd-networkd[997]: eth0: Gained carrier
Nov 1 00:38:23.079767 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Nov 1 00:38:23.080513 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Nov 1 00:38:23.098793 kernel: ACPI: button: Power Button [PWRF]
Nov 1 00:38:23.112000 audit[1002]: AVC avc: denied { confidentiality } for pid=1002 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Nov 1 00:38:23.122721 kernel: audit: type=1400 audit(1761957503.112:159): avc: denied { confidentiality } for pid=1002 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Nov 1 00:38:23.112000 audit[1002]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5612e5b72ed0 a1=338ec a2=7f7da6f83bc5 a3=5 items=110 ppid=994 pid=1002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:38:23.142714 kernel: audit: type=1300 audit(1761957503.112:159): arch=c000003e syscall=175 success=yes exit=0 a0=5612e5b72ed0 a1=338ec a2=7f7da6f83bc5 a3=5 items=110 ppid=994 pid=1002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:38:23.112000 audit: CWD cwd="/"
Nov 1 00:38:23.112000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=1 name=(null) inode=13177 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=2 name=(null) inode=13177 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=3 name=(null) inode=13178 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=4 name=(null) inode=13177 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=5 name=(null) inode=13179 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=6 name=(null) inode=13177 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=7 name=(null) inode=13180 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=8 name=(null) inode=13180 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=9 name=(null) inode=13181 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=10 name=(null) inode=13180 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=11 name=(null) inode=13182 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=12 name=(null) inode=13180 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=13 name=(null) inode=13183 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.161711 kernel: audit: type=1307 audit(1761957503.112:159): cwd="/"
Nov 1 00:38:23.112000 audit: PATH item=14 name=(null) inode=13180 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=15 name=(null) inode=13184 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=16 name=(null) inode=13180 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=17 name=(null) inode=13185 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=18 name=(null) inode=13177 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=19 name=(null) inode=13186 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=20 name=(null) inode=13186 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=21 name=(null) inode=13187 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=22 name=(null) inode=13186 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=23 name=(null) inode=13188 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=24 name=(null) inode=13186 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=25 name=(null) inode=13189 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=26 name=(null) inode=13186 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=27 name=(null) inode=13190 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=28 name=(null) inode=13186 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=29 name=(null) inode=13191 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=30 name=(null) inode=13177 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=31 name=(null) inode=13192 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=32 name=(null) inode=13192 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=33 name=(null) inode=13193 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=34 name=(null) inode=13192 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=35 name=(null) inode=13194 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=36 name=(null) inode=13192 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=37 name=(null) inode=13195 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=38 name=(null) inode=13192 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=39 name=(null) inode=13196 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=40 name=(null) inode=13192 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=41 name=(null) inode=13197 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=42 name=(null) inode=13177 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=43 name=(null) inode=13198 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=44 name=(null) inode=13198 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=45 name=(null) inode=13199 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=46 name=(null) inode=13198 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=47 name=(null) inode=13200 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=48 name=(null) inode=13198 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=49 name=(null) inode=13201 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=50 name=(null) inode=13198 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=51 name=(null) inode=13202 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=52 name=(null) inode=13198 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=53 name=(null) inode=13203 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=55 name=(null) inode=13204 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=56 name=(null) inode=13204 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=57 name=(null) inode=13205 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=58 name=(null) inode=13204 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=59 name=(null) inode=13206 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=60 name=(null) inode=13204 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=61 name=(null) inode=13207 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=62 name=(null) inode=13207 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=63 name=(null) inode=13208 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=64 name=(null) inode=13207 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=65 name=(null) inode=13209 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=66 name=(null) inode=13207 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=67 name=(null) inode=13210 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=68 name=(null) inode=13207 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=69 name=(null) inode=13211 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=70 name=(null) inode=13207 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=71 name=(null) inode=13212 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=72 name=(null) inode=13204 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=73 name=(null) inode=13213 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=74 name=(null) inode=13213 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=75 name=(null) inode=13214 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=76 name=(null) inode=13213 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=77 name=(null) inode=13215 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=78 name=(null) inode=13213 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=79 name=(null) inode=13216 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=80 name=(null) inode=13213 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=81 name=(null) inode=13217 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=82 name=(null) inode=13213 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=83 name=(null) inode=13218 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=84 name=(null) inode=13204 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=85 name=(null) inode=13219 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=86 name=(null) inode=13219 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=87 name=(null) inode=13220 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=88 name=(null) inode=13219 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=89 name=(null) inode=13221 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=90 name=(null) inode=13219 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=91 name=(null) inode=13222 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=92 name=(null) inode=13219 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=93 name=(null) inode=13223 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=94 name=(null) inode=13219 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=95 name=(null) inode=13224 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=96 name=(null) inode=13204 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=97 name=(null) inode=13225 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=98 name=(null) inode=13225 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=99 name=(null) inode=13226 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=100 name=(null) inode=13225 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=101 name=(null) inode=13227 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=102 name=(null) inode=13225 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=103 name=(null) inode=13228 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=104 name=(null) inode=13225 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=105 name=(null) inode=13229 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=106 name=(null) inode=13225 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=107 name=(null) inode=13230 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PATH item=109 name=(null) inode=14332 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:38:23.112000 audit: PROCTITLE proctitle="(udev-worker)"
Nov 1 00:38:23.180303 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Nov 1 00:38:23.189744 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Nov 1 00:38:23.194725 kernel: mousedev: PS/2 mouse device common for all mice
Nov 1 00:38:23.352748 kernel: EDAC MC: Ver: 3.0.0
Nov 1 00:38:23.378397 systemd[1]: Finished systemd-udev-settle.service.
Nov 1 00:38:23.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:23.381460 systemd[1]: Starting lvm2-activation-early.service...
Nov 1 00:38:23.404162 lvm[1032]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 1 00:38:23.433486 systemd[1]: Finished lvm2-activation-early.service.
Nov 1 00:38:23.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:23.434602 systemd[1]: Reached target cryptsetup.target.
Nov 1 00:38:23.436941 systemd[1]: Starting lvm2-activation.service...
Nov 1 00:38:23.444252 lvm[1033]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 1 00:38:23.473220 systemd[1]: Finished lvm2-activation.service.
Nov 1 00:38:23.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:23.474171 systemd[1]: Reached target local-fs-pre.target.
Nov 1 00:38:23.476548 systemd[1]: Mounting media-configdrive.mount...
Nov 1 00:38:23.477389 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 1 00:38:23.477431 systemd[1]: Reached target machines.target.
Nov 1 00:38:23.479494 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Nov 1 00:38:23.493892 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Nov 1 00:38:23.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:23.499744 kernel: ISO 9660 Extensions: RRIP_1991A
Nov 1 00:38:23.502152 systemd[1]: Mounted media-configdrive.mount.
Nov 1 00:38:23.503303 systemd[1]: Reached target local-fs.target.
Nov 1 00:38:23.505488 systemd[1]: Starting ldconfig.service...
Nov 1 00:38:23.506843 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Nov 1 00:38:23.506910 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 1 00:38:23.508169 systemd[1]: Starting systemd-boot-update.service...
Nov 1 00:38:23.515770 systemd[1]: Starting systemd-machine-id-commit.service...
Nov 1 00:38:23.518209 systemd[1]: Starting systemd-sysext.service...
Nov 1 00:38:23.521670 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1039 (bootctl)
Nov 1 00:38:23.523248 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Nov 1 00:38:23.557081 systemd[1]: Unmounting usr-share-oem.mount...
Nov 1 00:38:23.574689 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Nov 1 00:38:23.574935 systemd[1]: Unmounted usr-share-oem.mount.
Nov 1 00:38:23.608173 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 1 00:38:23.610905 kernel: loop0: detected capacity change from 0 to 224512
Nov 1 00:38:23.613000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:23.613444 systemd[1]: Finished systemd-machine-id-commit.service.
Nov 1 00:38:23.652128 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 1 00:38:23.674385 systemd-fsck[1045]: fsck.fat 4.2 (2021-01-31)
Nov 1 00:38:23.674385 systemd-fsck[1045]: /dev/vda1: 790 files, 120773/258078 clusters
Nov 1 00:38:23.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:23.685436 kernel: loop1: detected capacity change from 0 to 224512
Nov 1 00:38:23.678485 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Nov 1 00:38:23.684273 systemd[1]: Mounting boot.mount...
Nov 1 00:38:23.708936 (sd-sysext)[1050]: Using extensions 'kubernetes'.
Nov 1 00:38:23.713287 systemd[1]: Mounted boot.mount.
Nov 1 00:38:23.714343 (sd-sysext)[1050]: Merged extensions into '/usr'.
Nov 1 00:38:23.761947 systemd[1]: Finished systemd-boot-update.service.
Nov 1 00:38:23.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:23.763649 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:38:23.765489 systemd[1]: Mounting usr-share-oem.mount...
Nov 1 00:38:23.766607 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Nov 1 00:38:23.769224 systemd[1]: Starting modprobe@dm_mod.service...
Nov 1 00:38:23.771978 systemd[1]: Starting modprobe@efi_pstore.service...
Nov 1 00:38:23.774183 systemd[1]: Starting modprobe@loop.service...
Nov 1 00:38:23.775123 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Nov 1 00:38:23.775242 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 1 00:38:23.775350 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:38:23.776343 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 1 00:38:23.776534 systemd[1]: Finished modprobe@dm_mod.service.
Nov 1 00:38:23.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:23.777000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:23.778479 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 1 00:38:23.778632 systemd[1]: Finished modprobe@efi_pstore.service.
Nov 1 00:38:23.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:23.779000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:23.780747 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 1 00:38:23.780893 systemd[1]: Finished modprobe@loop.service.
Nov 1 00:38:23.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:23.782000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:23.785585 systemd[1]: Mounted usr-share-oem.mount.
Nov 1 00:38:23.787443 systemd[1]: Finished systemd-sysext.service.
Nov 1 00:38:23.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:23.790811 systemd[1]: Starting ensure-sysext.service...
Nov 1 00:38:23.791550 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 1 00:38:23.791621 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Nov 1 00:38:23.792985 systemd[1]: Starting systemd-tmpfiles-setup.service...
Nov 1 00:38:23.801283 systemd[1]: Reloading.
Nov 1 00:38:23.837825 systemd-tmpfiles[1058]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Nov 1 00:38:23.844296 systemd-tmpfiles[1058]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 1 00:38:23.851485 systemd-tmpfiles[1058]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 1 00:38:23.953882 ldconfig[1038]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 1 00:38:23.961333 /usr/lib/systemd/system-generators/torcx-generator[1077]: time="2025-11-01T00:38:23Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Nov 1 00:38:23.961365 /usr/lib/systemd/system-generators/torcx-generator[1077]: time="2025-11-01T00:38:23Z" level=info msg="torcx already run"
Nov 1 00:38:24.071985 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Nov 1 00:38:24.072018 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 1 00:38:24.093460 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 1 00:38:24.151000 audit: BPF prog-id=27 op=LOAD
Nov 1 00:38:24.152000 audit: BPF prog-id=24 op=UNLOAD
Nov 1 00:38:24.152000 audit: BPF prog-id=28 op=LOAD
Nov 1 00:38:24.152000 audit: BPF prog-id=29 op=LOAD
Nov 1 00:38:24.152000 audit: BPF prog-id=25 op=UNLOAD
Nov 1 00:38:24.152000 audit: BPF prog-id=26 op=UNLOAD
Nov 1 00:38:24.152000 audit: BPF prog-id=30 op=LOAD
Nov 1 00:38:24.152000 audit: BPF prog-id=31 op=LOAD
Nov 1 00:38:24.152000 audit: BPF prog-id=21 op=UNLOAD
Nov 1 00:38:24.152000 audit: BPF prog-id=22 op=UNLOAD
Nov 1 00:38:24.153000 audit: BPF prog-id=32 op=LOAD
Nov 1 00:38:24.153000 audit: BPF prog-id=23 op=UNLOAD
Nov 1 00:38:24.156000 audit: BPF prog-id=33 op=LOAD
Nov 1 00:38:24.156000 audit: BPF prog-id=18 op=UNLOAD
Nov 1 00:38:24.156000 audit: BPF prog-id=34 op=LOAD
Nov 1 00:38:24.157000 audit: BPF prog-id=35 op=LOAD
Nov 1 00:38:24.157000 audit: BPF prog-id=19 op=UNLOAD
Nov 1 00:38:24.157000 audit: BPF prog-id=20 op=UNLOAD
Nov 1 00:38:24.160304 systemd[1]: Finished ldconfig.service.
Nov 1 00:38:24.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:24.162857 systemd[1]: Finished systemd-tmpfiles-setup.service.
Nov 1 00:38:24.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:24.168709 systemd[1]: Starting audit-rules.service...
Nov 1 00:38:24.171012 systemd[1]: Starting clean-ca-certificates.service...
Nov 1 00:38:24.173668 systemd[1]: Starting systemd-journal-catalog-update.service...
Nov 1 00:38:24.177000 audit: BPF prog-id=36 op=LOAD
Nov 1 00:38:24.180958 systemd[1]: Starting systemd-resolved.service...
Nov 1 00:38:24.182000 audit: BPF prog-id=37 op=LOAD
Nov 1 00:38:24.184129 systemd[1]: Starting systemd-timesyncd.service...
Nov 1 00:38:24.188519 systemd[1]: Starting systemd-update-utmp.service...
Nov 1 00:38:24.190105 systemd[1]: Finished clean-ca-certificates.service.
Nov 1 00:38:24.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:24.193350 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 1 00:38:24.197451 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Nov 1 00:38:24.200136 systemd[1]: Starting modprobe@dm_mod.service...
Nov 1 00:38:24.205249 systemd[1]: Starting modprobe@efi_pstore.service...
Nov 1 00:38:24.208151 systemd[1]: Starting modprobe@loop.service...
Nov 1 00:38:24.209838 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Nov 1 00:38:24.209988 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 1 00:38:24.210110 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 1 00:38:24.212311 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 1 00:38:24.214042 systemd[1]: Finished modprobe@dm_mod.service.
Nov 1 00:38:24.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:24.214000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:24.215286 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 1 00:38:24.215425 systemd[1]: Finished modprobe@efi_pstore.service.
Nov 1 00:38:24.215909 systemd-networkd[997]: eth0: Gained IPv6LL
Nov 1 00:38:24.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:24.216000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:24.217249 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 1 00:38:24.217377 systemd[1]: Finished modprobe@loop.service.
Nov 1 00:38:24.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:24.218000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:24.221295 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Nov 1 00:38:24.224577 systemd[1]: Starting modprobe@dm_mod.service...
Nov 1 00:38:24.225000 audit[1133]: SYSTEM_BOOT pid=1133 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:24.227796 systemd[1]: Starting modprobe@efi_pstore.service...
Nov 1 00:38:24.231361 systemd[1]: Starting modprobe@loop.service...
Nov 1 00:38:24.232154 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Nov 1 00:38:24.232320 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 1 00:38:24.232490 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 1 00:38:24.239603 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Nov 1 00:38:24.242715 systemd[1]: Starting modprobe@drm.service...
Nov 1 00:38:24.243987 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Nov 1 00:38:24.244229 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 1 00:38:24.246753 systemd[1]: Starting systemd-networkd-wait-online.service...
Nov 1 00:38:24.247764 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 1 00:38:24.249120 systemd[1]: Finished systemd-update-utmp.service.
Nov 1 00:38:24.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:24.251352 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 1 00:38:24.251491 systemd[1]: Finished modprobe@dm_mod.service.
Nov 1 00:38:24.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:24.252000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:24.253360 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 1 00:38:24.253509 systemd[1]: Finished modprobe@efi_pstore.service.
Nov 1 00:38:24.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:24.254000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:24.255303 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 1 00:38:24.255435 systemd[1]: Finished modprobe@loop.service.
Nov 1 00:38:24.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:24.256000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:24.257232 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 1 00:38:24.257361 systemd[1]: Finished modprobe@drm.service.
Nov 1 00:38:24.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:24.257000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:24.261196 systemd[1]: Finished ensure-sysext.service.
Nov 1 00:38:24.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:24.262978 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 1 00:38:24.263091 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Nov 1 00:38:24.277985 systemd[1]: Finished systemd-networkd-wait-online.service.
Nov 1 00:38:24.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:38:24.292000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Nov 1 00:38:24.292000 audit[1153]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd8b8c58f0 a2=420 a3=0 items=0 ppid=1125 pid=1153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:38:24.292000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Nov 1 00:38:24.294033 augenrules[1153]: No rules
Nov 1 00:38:24.295079 systemd[1]: Finished audit-rules.service.
Nov 1 00:38:24.306632 systemd[1]: Finished systemd-journal-catalog-update.service.
Nov 1 00:38:24.309560 systemd[1]: Starting systemd-update-done.service...
Nov 1 00:38:24.320769 systemd[1]: Finished systemd-update-done.service.
Nov 1 00:38:24.332934 systemd[1]: Started systemd-timesyncd.service.
Nov 1 00:38:24.333803 systemd[1]: Reached target time-set.target.
Nov 1 00:38:24.337560 systemd-resolved[1129]: Positive Trust Anchors:
Nov 1 00:38:24.337577 systemd-resolved[1129]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 1 00:38:24.337607 systemd-resolved[1129]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Nov 1 00:38:25.058950 systemd-timesyncd[1132]: Contacted time server 198.137.202.32:123 (0.flatcar.pool.ntp.org).
Nov 1 00:38:25.059458 systemd-timesyncd[1132]: Initial clock synchronization to Sat 2025-11-01 00:38:25.058809 UTC.
Nov 1 00:38:25.062911 systemd-resolved[1129]: Using system hostname 'ci-3510.3.8-n-368ce9a156'.
Nov 1 00:38:25.065077 systemd[1]: Started systemd-resolved.service.
Nov 1 00:38:25.065937 systemd[1]: Reached target network.target.
Nov 1 00:38:25.066663 systemd[1]: Reached target network-online.target.
Nov 1 00:38:25.067322 systemd[1]: Reached target nss-lookup.target.
Nov 1 00:38:25.067981 systemd[1]: Reached target sysinit.target.
Nov 1 00:38:25.068736 systemd[1]: Started motdgen.path.
Nov 1 00:38:25.069364 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Nov 1 00:38:25.070526 systemd[1]: Started logrotate.timer.
Nov 1 00:38:25.071227 systemd[1]: Started mdadm.timer.
Nov 1 00:38:25.071886 systemd[1]: Started systemd-tmpfiles-clean.timer.
Nov 1 00:38:25.072577 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 1 00:38:25.072609 systemd[1]: Reached target paths.target.
Nov 1 00:38:25.073257 systemd[1]: Reached target timers.target.
Nov 1 00:38:25.074336 systemd[1]: Listening on dbus.socket.
Nov 1 00:38:25.076501 systemd[1]: Starting docker.socket...
Nov 1 00:38:25.080966 systemd[1]: Listening on sshd.socket.
Nov 1 00:38:25.081842 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 1 00:38:25.082647 systemd[1]: Listening on docker.socket.
Nov 1 00:38:25.083449 systemd[1]: Reached target sockets.target.
Nov 1 00:38:25.084118 systemd[1]: Reached target basic.target.
Nov 1 00:38:25.084832 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Nov 1 00:38:25.084859 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Nov 1 00:38:25.086311 systemd[1]: Starting containerd.service...
Nov 1 00:38:25.089712 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Nov 1 00:38:25.091908 systemd[1]: Starting dbus.service...
Nov 1 00:38:25.095422 systemd[1]: Starting enable-oem-cloudinit.service...
Nov 1 00:38:25.103430 jq[1167]: false
Nov 1 00:38:25.103858 systemd[1]: Starting extend-filesystems.service...
Nov 1 00:38:25.104962 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Nov 1 00:38:25.108380 systemd[1]: Starting kubelet.service...
Nov 1 00:38:25.112528 systemd[1]: Starting motdgen.service...
Nov 1 00:38:25.116721 systemd[1]: Starting prepare-helm.service...
Nov 1 00:38:25.123868 systemd[1]: Starting ssh-key-proc-cmdline.service...
Nov 1 00:38:25.126755 systemd[1]: Starting sshd-keygen.service...
Nov 1 00:38:25.133338 extend-filesystems[1168]: Found loop1
Nov 1 00:38:25.134852 systemd[1]: Starting systemd-logind.service...
Nov 1 00:38:25.135756 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 1 00:38:25.135876 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 1 00:38:25.136465 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 1 00:38:25.139843 systemd[1]: Starting update-engine.service...
Nov 1 00:38:25.142783 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Nov 1 00:38:25.145834 extend-filesystems[1168]: Found vda
Nov 1 00:38:25.146712 extend-filesystems[1168]: Found vda1
Nov 1 00:38:25.147553 extend-filesystems[1168]: Found vda2
Nov 1 00:38:25.148376 extend-filesystems[1168]: Found vda3
Nov 1 00:38:25.149171 extend-filesystems[1168]: Found usr
Nov 1 00:38:25.150325 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 1 00:38:25.150699 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Nov 1 00:38:25.151189 extend-filesystems[1168]: Found vda4
Nov 1 00:38:25.154534 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 1 00:38:25.155375 systemd[1]: Finished ssh-key-proc-cmdline.service.
Nov 1 00:38:25.170242 extend-filesystems[1168]: Found vda6
Nov 1 00:38:25.189121 jq[1179]: true
Nov 1 00:38:25.189261 extend-filesystems[1168]: Found vda7
Nov 1 00:38:25.189261 extend-filesystems[1168]: Found vda9
Nov 1 00:38:25.189261 extend-filesystems[1168]: Checking size of /dev/vda9
Nov 1 00:38:25.229894 jq[1187]: true
Nov 1 00:38:25.230177 tar[1182]: linux-amd64/LICENSE
Nov 1 00:38:25.230177 tar[1182]: linux-amd64/helm
Nov 1 00:38:25.206151 dbus-daemon[1164]: [system] SELinux support is enabled
Nov 1 00:38:25.206406 systemd[1]: Started dbus.service.
Nov 1 00:38:25.213859 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 1 00:38:25.213899 systemd[1]: Reached target system-config.target.
Nov 1 00:38:25.214742 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 1 00:38:25.214763 systemd[1]: Reached target user-config.target.
Nov 1 00:38:25.220785 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:38:25.220820 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:38:25.264926 systemd[1]: motdgen.service: Deactivated successfully.
Nov 1 00:38:25.265214 systemd[1]: Finished motdgen.service.
Nov 1 00:38:25.265750 extend-filesystems[1168]: Resized partition /dev/vda9
Nov 1 00:38:25.298000 extend-filesystems[1209]: resize2fs 1.46.5 (30-Dec-2021)
Nov 1 00:38:25.307669 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Nov 1 00:38:25.330910 update_engine[1178]: I1101 00:38:25.330021 1178 main.cc:92] Flatcar Update Engine starting
Nov 1 00:38:25.335903 update_engine[1178]: I1101 00:38:25.335866 1178 update_check_scheduler.cc:74] Next update check in 2m39s
Nov 1 00:38:25.335948 systemd[1]: Started update-engine.service.
Nov 1 00:38:25.346950 systemd[1]: Started locksmithd.service.
Nov 1 00:38:25.392647 bash[1219]: Updated "/home/core/.ssh/authorized_keys"
Nov 1 00:38:25.393176 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Nov 1 00:38:25.441713 env[1183]: time="2025-11-01T00:38:25.441429365Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Nov 1 00:38:25.474842 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Nov 1 00:38:25.503921 coreos-metadata[1163]: Nov 01 00:38:25.474 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Nov 1 00:38:25.503921 coreos-metadata[1163]: Nov 01 00:38:25.489 INFO Fetch successful
Nov 1 00:38:25.508687 systemd-logind[1177]: Watching system buttons on /dev/input/event1 (Power Button)
Nov 1 00:38:25.509282 extend-filesystems[1209]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Nov 1 00:38:25.509282 extend-filesystems[1209]: old_desc_blocks = 1, new_desc_blocks = 8
Nov 1 00:38:25.509282 extend-filesystems[1209]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Nov 1 00:38:25.508713 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 1 00:38:25.532757 extend-filesystems[1168]: Resized filesystem in /dev/vda9 Nov 1 00:38:25.532757 extend-filesystems[1168]: Found vdb Nov 1 00:38:25.508716 systemd-logind[1177]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 1 00:38:25.509010 systemd[1]: Finished extend-filesystems.service. Nov 1 00:38:25.510780 systemd-logind[1177]: New seat seat0. Nov 1 00:38:25.515514 unknown[1163]: wrote ssh authorized keys file for user: core Nov 1 00:38:25.528987 systemd[1]: Started systemd-logind.service. Nov 1 00:38:25.567523 update-ssh-keys[1224]: Updated "/home/core/.ssh/authorized_keys" Nov 1 00:38:25.567823 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Nov 1 00:38:25.604712 env[1183]: time="2025-11-01T00:38:25.604644978Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 1 00:38:25.606035 env[1183]: time="2025-11-01T00:38:25.605986977Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:38:25.609295 env[1183]: time="2025-11-01T00:38:25.609214397Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:38:25.609693 env[1183]: time="2025-11-01T00:38:25.609657933Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:38:25.610191 env[1183]: time="2025-11-01T00:38:25.610144569Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:38:25.612701 env[1183]: time="2025-11-01T00:38:25.612655301Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 1 00:38:25.612891 env[1183]: time="2025-11-01T00:38:25.612865048Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Nov 1 00:38:25.612985 env[1183]: time="2025-11-01T00:38:25.612963767Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 1 00:38:25.613244 env[1183]: time="2025-11-01T00:38:25.613217265Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:38:25.615693 env[1183]: time="2025-11-01T00:38:25.615655835Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:38:25.616028 env[1183]: time="2025-11-01T00:38:25.615999603Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:38:25.617686 env[1183]: time="2025-11-01T00:38:25.617652170Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Nov 1 00:38:25.617901 env[1183]: time="2025-11-01T00:38:25.617877768Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Nov 1 00:38:25.617977 env[1183]: time="2025-11-01T00:38:25.617960785Z" level=info msg="metadata content store policy set" policy=shared Nov 1 00:38:25.624895 env[1183]: time="2025-11-01T00:38:25.624838757Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 1 00:38:25.625125 env[1183]: time="2025-11-01T00:38:25.625101332Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 1 00:38:25.625223 env[1183]: time="2025-11-01T00:38:25.625206273Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 1 00:38:25.625343 env[1183]: time="2025-11-01T00:38:25.625326188Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 1 00:38:25.625445 env[1183]: time="2025-11-01T00:38:25.625428892Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 1 00:38:25.625539 env[1183]: time="2025-11-01T00:38:25.625522499Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 1 00:38:25.625651 env[1183]: time="2025-11-01T00:38:25.625635203Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 1 00:38:25.625759 env[1183]: time="2025-11-01T00:38:25.625742363Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 1 00:38:25.625857 env[1183]: time="2025-11-01T00:38:25.625840204Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." 
type=io.containerd.service.v1 Nov 1 00:38:25.625953 env[1183]: time="2025-11-01T00:38:25.625936103Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 1 00:38:25.626046 env[1183]: time="2025-11-01T00:38:25.626029149Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 1 00:38:25.626141 env[1183]: time="2025-11-01T00:38:25.626124560Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 1 00:38:25.626474 env[1183]: time="2025-11-01T00:38:25.626432844Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 1 00:38:25.626772 env[1183]: time="2025-11-01T00:38:25.626753083Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 1 00:38:25.627372 env[1183]: time="2025-11-01T00:38:25.627348884Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 1 00:38:25.627498 env[1183]: time="2025-11-01T00:38:25.627479236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 1 00:38:25.627589 env[1183]: time="2025-11-01T00:38:25.627571702Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 1 00:38:25.627764 env[1183]: time="2025-11-01T00:38:25.627729800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 1 00:38:25.627858 env[1183]: time="2025-11-01T00:38:25.627842537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 1 00:38:25.627998 env[1183]: time="2025-11-01T00:38:25.627969189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Nov 1 00:38:25.628093 env[1183]: time="2025-11-01T00:38:25.628076754Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 1 00:38:25.628186 env[1183]: time="2025-11-01T00:38:25.628170144Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 1 00:38:25.628279 env[1183]: time="2025-11-01T00:38:25.628262871Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 1 00:38:25.628369 env[1183]: time="2025-11-01T00:38:25.628352952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 1 00:38:25.628471 env[1183]: time="2025-11-01T00:38:25.628450378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 1 00:38:25.628592 env[1183]: time="2025-11-01T00:38:25.628574069Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 1 00:38:25.628881 env[1183]: time="2025-11-01T00:38:25.628862973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 1 00:38:25.628985 env[1183]: time="2025-11-01T00:38:25.628968774Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 1 00:38:25.629077 env[1183]: time="2025-11-01T00:38:25.629061872Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 1 00:38:25.629169 env[1183]: time="2025-11-01T00:38:25.629152721Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 1 00:38:25.629281 env[1183]: time="2025-11-01T00:38:25.629260632Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Nov 1 00:38:25.629369 env[1183]: time="2025-11-01T00:38:25.629352732Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 1 00:38:25.629471 env[1183]: time="2025-11-01T00:38:25.629453841Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Nov 1 00:38:25.629596 env[1183]: time="2025-11-01T00:38:25.629579481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Nov 1 00:38:25.630015 env[1183]: time="2025-11-01T00:38:25.629945998Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 
SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 1 00:38:25.632371 env[1183]: time="2025-11-01T00:38:25.630217856Z" level=info msg="Connect containerd service" Nov 1 00:38:25.632371 env[1183]: time="2025-11-01T00:38:25.630277607Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 1 00:38:25.637389 env[1183]: time="2025-11-01T00:38:25.636429379Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 1 00:38:25.637389 env[1183]: time="2025-11-01T00:38:25.636800430Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 1 00:38:25.637389 env[1183]: time="2025-11-01T00:38:25.636869877Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 1 00:38:25.637074 systemd[1]: Started containerd.service. 
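The `failed to load cni during init ... no network config found in /etc/cni/net.d` error above clears once a CNI network config is installed under `/etc/cni/net.d` (the `NetworkPluginConfDir` shown in the CRI config). A minimal bridge config of the kind the CRI plugin would pick up might look roughly like the sketch below; the filename, network name, and subnet are illustrative assumptions, not values from this host:

```json
{
  "cniVersion": "0.4.0",
  "name": "examplenet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.85.0.0/16",
    "routes": [{ "dst": "0.0.0.0/0" }]
  }
}
```

On a cluster node this file is normally written by the pod-network add-on rather than by hand, which is why containerd logs the message as informational during early boot.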
Nov 1 00:38:25.638316 env[1183]: time="2025-11-01T00:38:25.638090628Z" level=info msg="Start subscribing containerd event"
Nov 1 00:38:25.638316 env[1183]: time="2025-11-01T00:38:25.638151703Z" level=info msg="Start recovering state"
Nov 1 00:38:25.638316 env[1183]: time="2025-11-01T00:38:25.638231247Z" level=info msg="Start event monitor"
Nov 1 00:38:25.638316 env[1183]: time="2025-11-01T00:38:25.638263792Z" level=info msg="Start snapshots syncer"
Nov 1 00:38:25.638316 env[1183]: time="2025-11-01T00:38:25.638274034Z" level=info msg="Start cni network conf syncer for default"
Nov 1 00:38:25.638316 env[1183]: time="2025-11-01T00:38:25.638283468Z" level=info msg="Start streaming server"
Nov 1 00:38:25.653076 env[1183]: time="2025-11-01T00:38:25.652973555Z" level=info msg="containerd successfully booted in 0.214871s"
Nov 1 00:38:25.703939 systemd-networkd[997]: eth1: Gained IPv6LL
Nov 1 00:38:26.583668 locksmithd[1220]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 1 00:38:26.822078 tar[1182]: linux-amd64/README.md
Nov 1 00:38:26.829974 systemd[1]: Finished prepare-helm.service.
Nov 1 00:38:27.042249 systemd[1]: Started kubelet.service.
Nov 1 00:38:27.596278 systemd[1]: Created slice system-sshd.slice.
Nov 1 00:38:27.657838 sshd_keygen[1191]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Nov 1 00:38:27.694379 systemd[1]: Finished sshd-keygen.service.
Nov 1 00:38:27.697377 systemd[1]: Starting issuegen.service...
Nov 1 00:38:27.700317 systemd[1]: Started sshd@0-146.190.139.75:22-139.178.89.65:52636.service.
Nov 1 00:38:27.713873 systemd[1]: issuegen.service: Deactivated successfully.
Nov 1 00:38:27.714052 systemd[1]: Finished issuegen.service.
Nov 1 00:38:27.716551 systemd[1]: Starting systemd-user-sessions.service...
Nov 1 00:38:27.728713 systemd[1]: Finished systemd-user-sessions.service.
Nov 1 00:38:27.732263 systemd[1]: Started getty@tty1.service.
Nov 1 00:38:27.735779 systemd[1]: Started serial-getty@ttyS0.service.
Nov 1 00:38:27.736879 systemd[1]: Reached target getty.target.
Nov 1 00:38:27.738115 systemd[1]: Reached target multi-user.target.
Nov 1 00:38:27.743771 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Nov 1 00:38:27.758044 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Nov 1 00:38:27.758342 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Nov 1 00:38:27.759556 systemd[1]: Startup finished in 1.137s (kernel) + 5.587s (initrd) + 8.656s (userspace) = 15.380s.
Nov 1 00:38:27.845773 sshd[1252]: Accepted publickey for core from 139.178.89.65 port 52636 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM
Nov 1 00:38:27.848809 sshd[1252]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:38:27.855384 kubelet[1237]: E1101 00:38:27.855172 1237 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 1 00:38:27.862513 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 1 00:38:27.862682 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 1 00:38:27.862919 systemd[1]: kubelet.service: Consumed 1.602s CPU time.
Nov 1 00:38:27.864082 systemd[1]: Created slice user-500.slice.
Nov 1 00:38:27.866221 systemd[1]: Starting user-runtime-dir@500.service...
Nov 1 00:38:27.872823 systemd-logind[1177]: New session 1 of user core.
Nov 1 00:38:27.880612 systemd[1]: Finished user-runtime-dir@500.service.
Nov 1 00:38:27.883208 systemd[1]: Starting user@500.service...
Nov 1 00:38:27.888255 (systemd)[1262]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:38:28.008434 systemd[1262]: Queued start job for default target default.target.
Nov 1 00:38:28.009306 systemd[1262]: Reached target paths.target.
Nov 1 00:38:28.009340 systemd[1262]: Reached target sockets.target.
Nov 1 00:38:28.009360 systemd[1262]: Reached target timers.target.
Nov 1 00:38:28.009378 systemd[1262]: Reached target basic.target.
Nov 1 00:38:28.009444 systemd[1262]: Reached target default.target.
Nov 1 00:38:28.009492 systemd[1262]: Startup finished in 112ms.
Nov 1 00:38:28.010385 systemd[1]: Started user@500.service.
Nov 1 00:38:28.012960 systemd[1]: Started session-1.scope.
Nov 1 00:38:28.086814 systemd[1]: Started sshd@1-146.190.139.75:22-139.178.89.65:52652.service.
Nov 1 00:38:28.137365 sshd[1271]: Accepted publickey for core from 139.178.89.65 port 52652 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM
Nov 1 00:38:28.140155 sshd[1271]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:38:28.148248 systemd[1]: Started session-2.scope.
Nov 1 00:38:28.149696 systemd-logind[1177]: New session 2 of user core.
Nov 1 00:38:28.219232 sshd[1271]: pam_unix(sshd:session): session closed for user core
Nov 1 00:38:28.224440 systemd[1]: sshd@1-146.190.139.75:22-139.178.89.65:52652.service: Deactivated successfully.
Nov 1 00:38:28.225307 systemd[1]: session-2.scope: Deactivated successfully.
Nov 1 00:38:28.226237 systemd-logind[1177]: Session 2 logged out. Waiting for processes to exit.
Nov 1 00:38:28.227837 systemd[1]: Started sshd@2-146.190.139.75:22-139.178.89.65:52664.service.
Nov 1 00:38:28.229062 systemd-logind[1177]: Removed session 2.
Nov 1 00:38:28.285166 sshd[1277]: Accepted publickey for core from 139.178.89.65 port 52664 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM
Nov 1 00:38:28.287205 sshd[1277]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:38:28.293076 systemd[1]: Started session-3.scope.
Nov 1 00:38:28.293877 systemd-logind[1177]: New session 3 of user core.
Nov 1 00:38:28.354119 sshd[1277]: pam_unix(sshd:session): session closed for user core
Nov 1 00:38:28.358714 systemd[1]: sshd@2-146.190.139.75:22-139.178.89.65:52664.service: Deactivated successfully.
Nov 1 00:38:28.359413 systemd[1]: session-3.scope: Deactivated successfully.
Nov 1 00:38:28.360137 systemd-logind[1177]: Session 3 logged out. Waiting for processes to exit.
Nov 1 00:38:28.361483 systemd[1]: Started sshd@3-146.190.139.75:22-139.178.89.65:52674.service.
Nov 1 00:38:28.363061 systemd-logind[1177]: Removed session 3.
Nov 1 00:38:28.419686 sshd[1283]: Accepted publickey for core from 139.178.89.65 port 52674 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM
Nov 1 00:38:28.421714 sshd[1283]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:38:28.427734 systemd[1]: Started session-4.scope.
Nov 1 00:38:28.428154 systemd-logind[1177]: New session 4 of user core.
Nov 1 00:38:28.494475 sshd[1283]: pam_unix(sshd:session): session closed for user core
Nov 1 00:38:28.500249 systemd[1]: sshd@3-146.190.139.75:22-139.178.89.65:52674.service: Deactivated successfully.
Nov 1 00:38:28.501140 systemd[1]: session-4.scope: Deactivated successfully.
Nov 1 00:38:28.502048 systemd-logind[1177]: Session 4 logged out. Waiting for processes to exit.
Nov 1 00:38:28.503571 systemd[1]: Started sshd@4-146.190.139.75:22-139.178.89.65:52686.service.
Nov 1 00:38:28.505939 systemd-logind[1177]: Removed session 4.
Nov 1 00:38:28.553969 sshd[1289]: Accepted publickey for core from 139.178.89.65 port 52686 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM
Nov 1 00:38:28.556155 sshd[1289]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:38:28.562100 systemd-logind[1177]: New session 5 of user core.
Nov 1 00:38:28.562866 systemd[1]: Started session-5.scope.
Nov 1 00:38:28.633776 sudo[1292]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Nov 1 00:38:28.634429 sudo[1292]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Nov 1 00:38:28.670312 systemd[1]: Starting docker.service...
Nov 1 00:38:28.731464 env[1302]: time="2025-11-01T00:38:28.731378432Z" level=info msg="Starting up"
Nov 1 00:38:28.734083 env[1302]: time="2025-11-01T00:38:28.733834490Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Nov 1 00:38:28.734083 env[1302]: time="2025-11-01T00:38:28.733864473Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Nov 1 00:38:28.734083 env[1302]: time="2025-11-01T00:38:28.733884522Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Nov 1 00:38:28.734083 env[1302]: time="2025-11-01T00:38:28.733896500Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Nov 1 00:38:28.736264 env[1302]: time="2025-11-01T00:38:28.736219511Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Nov 1 00:38:28.736264 env[1302]: time="2025-11-01T00:38:28.736242466Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Nov 1 00:38:28.736264 env[1302]: time="2025-11-01T00:38:28.736257677Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Nov 1 00:38:28.736264 env[1302]: time="2025-11-01T00:38:28.736267033Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Nov 1 00:38:28.829644 env[1302]: time="2025-11-01T00:38:28.829576233Z" level=info msg="Loading containers: start."
Nov 1 00:38:28.998688 kernel: Initializing XFRM netlink socket
Nov 1 00:38:29.046075 env[1302]: time="2025-11-01T00:38:29.045991722Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Nov 1 00:38:29.139528 systemd-networkd[997]: docker0: Link UP
Nov 1 00:38:29.157578 env[1302]: time="2025-11-01T00:38:29.157532439Z" level=info msg="Loading containers: done."
Nov 1 00:38:29.177776 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1551166899-merged.mount: Deactivated successfully.
Nov 1 00:38:29.180359 env[1302]: time="2025-11-01T00:38:29.180312143Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Nov 1 00:38:29.180811 env[1302]: time="2025-11-01T00:38:29.180780295Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Nov 1 00:38:29.181047 env[1302]: time="2025-11-01T00:38:29.181028721Z" level=info msg="Daemon has completed initialization"
Nov 1 00:38:29.201803 systemd[1]: Started docker.service.
Nov 1 00:38:29.206737 env[1302]: time="2025-11-01T00:38:29.206655671Z" level=info msg="API listen on /run/docker.sock"
Nov 1 00:38:29.229292 systemd[1]: Starting coreos-metadata.service...
Nov 1 00:38:29.280925 coreos-metadata[1421]: Nov 01 00:38:29.280 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Nov 1 00:38:29.293111 coreos-metadata[1421]: Nov 01 00:38:29.292 INFO Fetch successful
Nov 1 00:38:29.307826 systemd[1]: Finished coreos-metadata.service.
Nov 1 00:38:30.232675 env[1183]: time="2025-11-01T00:38:30.232585410Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\""
Nov 1 00:38:30.732875 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount289776303.mount: Deactivated successfully.
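The `--bip` option mentioned in the docker0 message above can also be set persistently through the daemon config file instead of a command-line flag; a sketch, assuming the default path `/etc/docker/daemon.json` and an illustrative address:

```json
{
  "bip": "172.18.0.1/16"
}
```

The daemon must be restarted for the new bridge address to take effect.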
Nov 1 00:38:32.465340 env[1183]: time="2025-11-01T00:38:32.465254304Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:38:32.467586 env[1183]: time="2025-11-01T00:38:32.467507560Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:38:32.469324 env[1183]: time="2025-11-01T00:38:32.469280949Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:38:32.471853 env[1183]: time="2025-11-01T00:38:32.471813913Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:38:32.472683 env[1183]: time="2025-11-01T00:38:32.472606581Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Nov 1 00:38:32.473381 env[1183]: time="2025-11-01T00:38:32.473348635Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Nov 1 00:38:34.238120 env[1183]: time="2025-11-01T00:38:34.238025271Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:38:34.240296 env[1183]: time="2025-11-01T00:38:34.240238552Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" 
Nov 1 00:38:34.242011 env[1183]: time="2025-11-01T00:38:34.241977318Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:38:34.243655 env[1183]: time="2025-11-01T00:38:34.243578967Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:38:34.244668 env[1183]: time="2025-11-01T00:38:34.244604238Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Nov 1 00:38:34.245263 env[1183]: time="2025-11-01T00:38:34.245232454Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Nov 1 00:38:36.043518 env[1183]: time="2025-11-01T00:38:36.043438313Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:38:36.045668 env[1183]: time="2025-11-01T00:38:36.045598957Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:38:36.047822 env[1183]: time="2025-11-01T00:38:36.047775640Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:38:36.050430 env[1183]: time="2025-11-01T00:38:36.050379245Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:38:36.051450 env[1183]: time="2025-11-01T00:38:36.051403479Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Nov 1 00:38:36.052260 env[1183]: time="2025-11-01T00:38:36.052226218Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Nov 1 00:38:37.295218 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1655694847.mount: Deactivated successfully. Nov 1 00:38:38.113950 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 1 00:38:38.114172 systemd[1]: Stopped kubelet.service. Nov 1 00:38:38.114225 systemd[1]: kubelet.service: Consumed 1.602s CPU time. Nov 1 00:38:38.116094 systemd[1]: Starting kubelet.service... 
Nov 1 00:38:38.232654 env[1183]: time="2025-11-01T00:38:38.232554019Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:38:38.238951 env[1183]: time="2025-11-01T00:38:38.238884625Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:38:38.246391 env[1183]: time="2025-11-01T00:38:38.246321735Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:38:38.248860 env[1183]: time="2025-11-01T00:38:38.248788783Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:38:38.249638 env[1183]: time="2025-11-01T00:38:38.249546622Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Nov 1 00:38:38.251516 env[1183]: time="2025-11-01T00:38:38.251271115Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Nov 1 00:38:38.294844 systemd[1]: Started kubelet.service. 
Nov 1 00:38:38.371607 kubelet[1443]: E1101 00:38:38.370927 1443 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:38:38.376773 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:38:38.376977 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:38:38.834985 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1341914939.mount: Deactivated successfully. Nov 1 00:38:40.030836 env[1183]: time="2025-11-01T00:38:40.030773779Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:38:40.033774 env[1183]: time="2025-11-01T00:38:40.033707877Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:38:40.036722 env[1183]: time="2025-11-01T00:38:40.036661219Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:38:40.039119 env[1183]: time="2025-11-01T00:38:40.039068706Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:38:40.040220 env[1183]: time="2025-11-01T00:38:40.040178336Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference 
\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Nov 1 00:38:40.040792 env[1183]: time="2025-11-01T00:38:40.040758927Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 1 00:38:40.588941 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3995274434.mount: Deactivated successfully. Nov 1 00:38:40.596073 env[1183]: time="2025-11-01T00:38:40.596008505Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:38:40.597281 env[1183]: time="2025-11-01T00:38:40.597239086Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:38:40.599105 env[1183]: time="2025-11-01T00:38:40.599046871Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:38:40.601126 env[1183]: time="2025-11-01T00:38:40.601068157Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:38:40.601716 env[1183]: time="2025-11-01T00:38:40.601681554Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 1 00:38:40.602812 env[1183]: time="2025-11-01T00:38:40.602780928Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Nov 1 00:38:41.185567 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1425504575.mount: Deactivated successfully. 
Nov 1 00:38:44.250406 env[1183]: time="2025-11-01T00:38:44.250324134Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:38:44.252269 env[1183]: time="2025-11-01T00:38:44.252225983Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:38:44.258357 env[1183]: time="2025-11-01T00:38:44.258274418Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:38:44.260195 env[1183]: time="2025-11-01T00:38:44.260138073Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:38:44.261291 env[1183]: time="2025-11-01T00:38:44.261247724Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Nov 1 00:38:47.549111 systemd[1]: Stopped kubelet.service. Nov 1 00:38:47.551940 systemd[1]: Starting kubelet.service... Nov 1 00:38:47.597122 systemd[1]: Reloading. 
Nov 1 00:38:47.728017 /usr/lib/systemd/system-generators/torcx-generator[1496]: time="2025-11-01T00:38:47Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Nov 1 00:38:47.734208 /usr/lib/systemd/system-generators/torcx-generator[1496]: time="2025-11-01T00:38:47Z" level=info msg="torcx already run"
Nov 1 00:38:47.854238 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Nov 1 00:38:47.854546 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 1 00:38:47.875207 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 1 00:38:47.981093 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Nov 1 00:38:47.981366 systemd[1]: kubelet.service: Failed with result 'signal'.
Nov 1 00:38:47.981709 systemd[1]: Stopped kubelet.service.
Nov 1 00:38:47.983877 systemd[1]: Starting kubelet.service...
Nov 1 00:38:48.121448 systemd[1]: Started kubelet.service.
Nov 1 00:38:48.177410 kubelet[1546]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 1 00:38:48.177410 kubelet[1546]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 1 00:38:48.177410 kubelet[1546]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:38:48.177909 kubelet[1546]: I1101 00:38:48.177472 1546 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:38:48.686548 kubelet[1546]: I1101 00:38:48.686480 1546 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 1 00:38:48.686548 kubelet[1546]: I1101 00:38:48.686521 1546 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:38:48.686903 kubelet[1546]: I1101 00:38:48.686878 1546 server.go:954] "Client rotation is on, will bootstrap in background" Nov 1 00:38:48.724913 kubelet[1546]: E1101 00:38:48.724867 1546 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://146.190.139.75:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 146.190.139.75:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:38:48.732273 kubelet[1546]: I1101 00:38:48.732219 1546 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:38:48.741971 kubelet[1546]: E1101 00:38:48.741909 1546 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 00:38:48.741971 kubelet[1546]: I1101 00:38:48.741976 1546 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Nov 1 00:38:48.745763 kubelet[1546]: I1101 00:38:48.745709 1546 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 1 00:38:48.746154 kubelet[1546]: I1101 00:38:48.746075 1546 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:38:48.746397 kubelet[1546]: I1101 00:38:48.746154 1546 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-n-368ce9a156","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":nul
l,"CgroupVersion":2} Nov 1 00:38:48.746523 kubelet[1546]: I1101 00:38:48.746411 1546 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 00:38:48.746523 kubelet[1546]: I1101 00:38:48.746424 1546 container_manager_linux.go:304] "Creating device plugin manager" Nov 1 00:38:48.746582 kubelet[1546]: I1101 00:38:48.746562 1546 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:38:48.750328 kubelet[1546]: I1101 00:38:48.750285 1546 kubelet.go:446] "Attempting to sync node with API server" Nov 1 00:38:48.750576 kubelet[1546]: I1101 00:38:48.750537 1546 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:38:48.750576 kubelet[1546]: I1101 00:38:48.750568 1546 kubelet.go:352] "Adding apiserver pod source" Nov 1 00:38:48.750731 kubelet[1546]: I1101 00:38:48.750581 1546 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:38:48.761067 kubelet[1546]: W1101 00:38:48.760979 1546 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://146.190.139.75:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-368ce9a156&limit=500&resourceVersion=0": dial tcp 146.190.139.75:6443: connect: connection refused Nov 1 00:38:48.761351 kubelet[1546]: E1101 00:38:48.761320 1546 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://146.190.139.75:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-368ce9a156&limit=500&resourceVersion=0\": dial tcp 146.190.139.75:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:38:48.762538 kubelet[1546]: I1101 00:38:48.762501 1546 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Nov 1 00:38:48.763027 kubelet[1546]: I1101 00:38:48.763002 1546 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static 
kubelet mode" Nov 1 00:38:48.763820 kubelet[1546]: W1101 00:38:48.763790 1546 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 1 00:38:48.768280 kubelet[1546]: I1101 00:38:48.768238 1546 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 00:38:48.768407 kubelet[1546]: I1101 00:38:48.768293 1546 server.go:1287] "Started kubelet" Nov 1 00:38:48.769064 kubelet[1546]: W1101 00:38:48.768452 1546 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://146.190.139.75:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 146.190.139.75:6443: connect: connection refused Nov 1 00:38:48.769064 kubelet[1546]: E1101 00:38:48.768522 1546 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://146.190.139.75:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 146.190.139.75:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:38:48.781590 kubelet[1546]: E1101 00:38:48.778297 1546 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://146.190.139.75:6443/api/v1/namespaces/default/events\": dial tcp 146.190.139.75:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.8-n-368ce9a156.1873bb09838ec533 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.8-n-368ce9a156,UID:ci-3510.3.8-n-368ce9a156,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.8-n-368ce9a156,},FirstTimestamp:2025-11-01 00:38:48.768267571 +0000 UTC m=+0.640158567,LastTimestamp:2025-11-01 00:38:48.768267571 +0000 UTC m=+0.640158567,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.8-n-368ce9a156,}" Nov 1 00:38:48.789502 kubelet[1546]: I1101 00:38:48.789371 1546 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:38:48.790148 kubelet[1546]: I1101 00:38:48.790125 1546 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:38:48.791727 kubelet[1546]: E1101 00:38:48.791695 1546 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 00:38:48.791834 kubelet[1546]: I1101 00:38:48.791802 1546 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:38:48.794915 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Nov 1 00:38:48.795108 kubelet[1546]: I1101 00:38:48.795085 1546 server.go:479] "Adding debug handlers to kubelet server" Nov 1 00:38:48.795285 kubelet[1546]: I1101 00:38:48.795268 1546 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:38:48.803280 kubelet[1546]: I1101 00:38:48.795871 1546 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:38:48.803541 kubelet[1546]: I1101 00:38:48.803398 1546 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 00:38:48.803816 kubelet[1546]: I1101 00:38:48.803793 1546 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 00:38:48.803881 kubelet[1546]: I1101 00:38:48.803864 1546 reconciler.go:26] "Reconciler: start to sync state" Nov 1 00:38:48.805344 kubelet[1546]: W1101 00:38:48.805287 1546 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://146.190.139.75:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.139.75:6443: connect: connection refused Nov 1 00:38:48.805444 kubelet[1546]: E1101 00:38:48.805371 1546 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://146.190.139.75:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 146.190.139.75:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:38:48.805653 kubelet[1546]: I1101 00:38:48.805608 1546 factory.go:221] Registration of the systemd container factory successfully Nov 1 00:38:48.805797 kubelet[1546]: I1101 00:38:48.805762 1546 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:38:48.808306 kubelet[1546]: E1101 00:38:48.808254 1546 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-368ce9a156\" not found" Nov 1 00:38:48.809389 kubelet[1546]: E1101 00:38:48.809350 1546 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.139.75:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-368ce9a156?timeout=10s\": dial tcp 146.190.139.75:6443: connect: connection refused" interval="200ms" Nov 1 00:38:48.809636 kubelet[1546]: I1101 00:38:48.809608 1546 factory.go:221] Registration of the containerd container factory successfully Nov 1 00:38:48.827673 kubelet[1546]: I1101 00:38:48.825609 1546 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 1 00:38:48.827673 kubelet[1546]: I1101 00:38:48.826860 1546 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 1 00:38:48.827673 kubelet[1546]: I1101 00:38:48.826884 1546 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 1 00:38:48.827673 kubelet[1546]: I1101 00:38:48.826916 1546 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 1 00:38:48.827673 kubelet[1546]: I1101 00:38:48.826925 1546 kubelet.go:2382] "Starting kubelet main sync loop" Nov 1 00:38:48.827673 kubelet[1546]: E1101 00:38:48.826983 1546 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:38:48.832748 kubelet[1546]: W1101 00:38:48.832692 1546 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://146.190.139.75:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.139.75:6443: connect: connection refused Nov 1 00:38:48.832949 kubelet[1546]: E1101 00:38:48.832926 1546 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://146.190.139.75:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 146.190.139.75:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:38:48.836497 kubelet[1546]: I1101 00:38:48.836454 1546 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:38:48.836497 kubelet[1546]: I1101 00:38:48.836477 1546 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:38:48.836497 kubelet[1546]: I1101 00:38:48.836498 1546 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:38:48.840832 kubelet[1546]: I1101 00:38:48.840791 1546 policy_none.go:49] "None policy: Start" Nov 1 00:38:48.840832 kubelet[1546]: I1101 00:38:48.840826 1546 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 
00:38:48.840832 kubelet[1546]: I1101 00:38:48.840840 1546 state_mem.go:35] "Initializing new in-memory state store" Nov 1 00:38:48.846939 systemd[1]: Created slice kubepods.slice. Nov 1 00:38:48.851531 systemd[1]: Created slice kubepods-burstable.slice. Nov 1 00:38:48.854530 systemd[1]: Created slice kubepods-besteffort.slice. Nov 1 00:38:48.861845 kubelet[1546]: I1101 00:38:48.861810 1546 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 1 00:38:48.863426 kubelet[1546]: I1101 00:38:48.863394 1546 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:38:48.863600 kubelet[1546]: I1101 00:38:48.863557 1546 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:38:48.864290 kubelet[1546]: I1101 00:38:48.864272 1546 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:38:48.865806 kubelet[1546]: E1101 00:38:48.865754 1546 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 1 00:38:48.866006 kubelet[1546]: E1101 00:38:48.865987 1546 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.8-n-368ce9a156\" not found" Nov 1 00:38:48.936396 systemd[1]: Created slice kubepods-burstable-pod8a6d131c578340116dc6222a2144d19d.slice. Nov 1 00:38:48.946363 kubelet[1546]: E1101 00:38:48.943812 1546 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-368ce9a156\" not found" node="ci-3510.3.8-n-368ce9a156" Nov 1 00:38:48.947492 systemd[1]: Created slice kubepods-burstable-podfc1188887baeed8969b5f5a5236c9082.slice. 
Nov 1 00:38:48.949532 kubelet[1546]: E1101 00:38:48.949505 1546 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-368ce9a156\" not found" node="ci-3510.3.8-n-368ce9a156" Nov 1 00:38:48.957588 systemd[1]: Created slice kubepods-burstable-pod0270a638842732e8d45e7783cebfee48.slice. Nov 1 00:38:48.959788 kubelet[1546]: E1101 00:38:48.959754 1546 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-368ce9a156\" not found" node="ci-3510.3.8-n-368ce9a156" Nov 1 00:38:48.965296 kubelet[1546]: I1101 00:38:48.965241 1546 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-368ce9a156" Nov 1 00:38:48.966092 kubelet[1546]: E1101 00:38:48.966045 1546 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://146.190.139.75:6443/api/v1/nodes\": dial tcp 146.190.139.75:6443: connect: connection refused" node="ci-3510.3.8-n-368ce9a156" Nov 1 00:38:49.010180 kubelet[1546]: E1101 00:38:49.010128 1546 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.139.75:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-368ce9a156?timeout=10s\": dial tcp 146.190.139.75:6443: connect: connection refused" interval="400ms" Nov 1 00:38:49.104997 kubelet[1546]: I1101 00:38:49.104935 1546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fc1188887baeed8969b5f5a5236c9082-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-368ce9a156\" (UID: \"fc1188887baeed8969b5f5a5236c9082\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-368ce9a156" Nov 1 00:38:49.105227 kubelet[1546]: I1101 00:38:49.105207 1546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/fc1188887baeed8969b5f5a5236c9082-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-368ce9a156\" (UID: \"fc1188887baeed8969b5f5a5236c9082\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-368ce9a156" Nov 1 00:38:49.105371 kubelet[1546]: I1101 00:38:49.105353 1546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0270a638842732e8d45e7783cebfee48-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-368ce9a156\" (UID: \"0270a638842732e8d45e7783cebfee48\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-368ce9a156" Nov 1 00:38:49.105452 kubelet[1546]: I1101 00:38:49.105435 1546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fc1188887baeed8969b5f5a5236c9082-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-368ce9a156\" (UID: \"fc1188887baeed8969b5f5a5236c9082\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-368ce9a156" Nov 1 00:38:49.105581 kubelet[1546]: I1101 00:38:49.105546 1546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a6d131c578340116dc6222a2144d19d-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-368ce9a156\" (UID: \"8a6d131c578340116dc6222a2144d19d\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-368ce9a156" Nov 1 00:38:49.105712 kubelet[1546]: I1101 00:38:49.105695 1546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a6d131c578340116dc6222a2144d19d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-368ce9a156\" (UID: \"8a6d131c578340116dc6222a2144d19d\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-368ce9a156" Nov 1 00:38:49.105799 kubelet[1546]: I1101 00:38:49.105784 1546 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fc1188887baeed8969b5f5a5236c9082-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-368ce9a156\" (UID: \"fc1188887baeed8969b5f5a5236c9082\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-368ce9a156" Nov 1 00:38:49.105948 kubelet[1546]: I1101 00:38:49.105895 1546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fc1188887baeed8969b5f5a5236c9082-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-368ce9a156\" (UID: \"fc1188887baeed8969b5f5a5236c9082\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-368ce9a156" Nov 1 00:38:49.106007 kubelet[1546]: I1101 00:38:49.105962 1546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a6d131c578340116dc6222a2144d19d-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-368ce9a156\" (UID: \"8a6d131c578340116dc6222a2144d19d\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-368ce9a156" Nov 1 00:38:49.168013 kubelet[1546]: I1101 00:38:49.167944 1546 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-368ce9a156" Nov 1 00:38:49.168593 kubelet[1546]: E1101 00:38:49.168552 1546 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://146.190.139.75:6443/api/v1/nodes\": dial tcp 146.190.139.75:6443: connect: connection refused" node="ci-3510.3.8-n-368ce9a156" Nov 1 00:38:49.248608 kubelet[1546]: E1101 00:38:49.247608 1546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:38:49.250030 env[1183]: time="2025-11-01T00:38:49.249971774Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-368ce9a156,Uid:8a6d131c578340116dc6222a2144d19d,Namespace:kube-system,Attempt:0,}" Nov 1 00:38:49.251066 kubelet[1546]: E1101 00:38:49.251035 1546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:38:49.252207 env[1183]: time="2025-11-01T00:38:49.251834948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-368ce9a156,Uid:fc1188887baeed8969b5f5a5236c9082,Namespace:kube-system,Attempt:0,}" Nov 1 00:38:49.261254 kubelet[1546]: E1101 00:38:49.261193 1546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:38:49.262045 env[1183]: time="2025-11-01T00:38:49.261967182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-368ce9a156,Uid:0270a638842732e8d45e7783cebfee48,Namespace:kube-system,Attempt:0,}" Nov 1 00:38:49.411377 kubelet[1546]: E1101 00:38:49.411250 1546 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.139.75:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-368ce9a156?timeout=10s\": dial tcp 146.190.139.75:6443: connect: connection refused" interval="800ms" Nov 1 00:38:49.571290 kubelet[1546]: I1101 00:38:49.570093 1546 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-368ce9a156" Nov 1 00:38:49.571290 kubelet[1546]: E1101 00:38:49.570528 1546 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://146.190.139.75:6443/api/v1/nodes\": dial tcp 146.190.139.75:6443: connect: connection refused" node="ci-3510.3.8-n-368ce9a156" Nov 1 00:38:49.682887 kubelet[1546]: W1101 00:38:49.682836 
1546 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://146.190.139.75:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 146.190.139.75:6443: connect: connection refused Nov 1 00:38:49.683112 kubelet[1546]: E1101 00:38:49.682890 1546 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://146.190.139.75:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 146.190.139.75:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:38:49.747302 kubelet[1546]: W1101 00:38:49.747201 1546 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://146.190.139.75:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-368ce9a156&limit=500&resourceVersion=0": dial tcp 146.190.139.75:6443: connect: connection refused Nov 1 00:38:49.747302 kubelet[1546]: E1101 00:38:49.747274 1546 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://146.190.139.75:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-368ce9a156&limit=500&resourceVersion=0\": dial tcp 146.190.139.75:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:38:49.830893 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3722157675.mount: Deactivated successfully. 
Nov 1 00:38:49.841349 env[1183]: time="2025-11-01T00:38:49.841291687Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:38:49.844413 env[1183]: time="2025-11-01T00:38:49.844363324Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:38:49.846312 env[1183]: time="2025-11-01T00:38:49.846250876Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:38:49.847299 env[1183]: time="2025-11-01T00:38:49.847266591Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:38:49.849069 env[1183]: time="2025-11-01T00:38:49.849028089Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:38:49.850717 env[1183]: time="2025-11-01T00:38:49.850683120Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:38:49.854816 env[1183]: time="2025-11-01T00:38:49.854776436Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:38:49.857822 env[1183]: time="2025-11-01T00:38:49.857785697Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 
00:38:49.858500 env[1183]: time="2025-11-01T00:38:49.858477179Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:38:49.859233 env[1183]: time="2025-11-01T00:38:49.859206453Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:38:49.860066 env[1183]: time="2025-11-01T00:38:49.860039849Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:38:49.861155 env[1183]: time="2025-11-01T00:38:49.861126743Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:38:49.905221 env[1183]: time="2025-11-01T00:38:49.903539819Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:38:49.905221 env[1183]: time="2025-11-01T00:38:49.903592191Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:38:49.905221 env[1183]: time="2025-11-01T00:38:49.903604479Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:38:49.905221 env[1183]: time="2025-11-01T00:38:49.903729941Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/85e64cd7c23aa8c93f1139ec95a3476ef6d33ae8c6a883b24e943bc3ec48279c pid=1595 runtime=io.containerd.runc.v2 Nov 1 00:38:49.909173 env[1183]: time="2025-11-01T00:38:49.907269977Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:38:49.909173 env[1183]: time="2025-11-01T00:38:49.907436751Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:38:49.909173 env[1183]: time="2025-11-01T00:38:49.907476904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:38:49.909173 env[1183]: time="2025-11-01T00:38:49.907794684Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9000cede242746323fc2df272ac3f46db0a80602903ed02a9447adb5aa50d9d2 pid=1606 runtime=io.containerd.runc.v2 Nov 1 00:38:49.913606 env[1183]: time="2025-11-01T00:38:49.913316579Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:38:49.913606 env[1183]: time="2025-11-01T00:38:49.913374537Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:38:49.913606 env[1183]: time="2025-11-01T00:38:49.913391274Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:38:49.913915 env[1183]: time="2025-11-01T00:38:49.913746353Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e47c795b295a39b67fe329af0a769aeaf2fe32e813539515f0e4bb999d914126 pid=1598 runtime=io.containerd.runc.v2 Nov 1 00:38:49.926997 systemd[1]: Started cri-containerd-85e64cd7c23aa8c93f1139ec95a3476ef6d33ae8c6a883b24e943bc3ec48279c.scope. Nov 1 00:38:49.966454 systemd[1]: Started cri-containerd-9000cede242746323fc2df272ac3f46db0a80602903ed02a9447adb5aa50d9d2.scope. Nov 1 00:38:49.979650 systemd[1]: Started cri-containerd-e47c795b295a39b67fe329af0a769aeaf2fe32e813539515f0e4bb999d914126.scope. Nov 1 00:38:50.005386 env[1183]: time="2025-11-01T00:38:50.005334104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-368ce9a156,Uid:0270a638842732e8d45e7783cebfee48,Namespace:kube-system,Attempt:0,} returns sandbox id \"85e64cd7c23aa8c93f1139ec95a3476ef6d33ae8c6a883b24e943bc3ec48279c\"" Nov 1 00:38:50.007459 kubelet[1546]: E1101 00:38:50.007142 1546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:38:50.011257 env[1183]: time="2025-11-01T00:38:50.011202950Z" level=info msg="CreateContainer within sandbox \"85e64cd7c23aa8c93f1139ec95a3476ef6d33ae8c6a883b24e943bc3ec48279c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 1 00:38:50.033379 env[1183]: time="2025-11-01T00:38:50.033300362Z" level=info msg="CreateContainer within sandbox \"85e64cd7c23aa8c93f1139ec95a3476ef6d33ae8c6a883b24e943bc3ec48279c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1efa079e6bff929a7df1e8ddcbb9d2a83d25188a8a2cb03e029b4285b4830c6b\"" Nov 1 00:38:50.034903 env[1183]: time="2025-11-01T00:38:50.034840469Z" level=info msg="StartContainer 
for \"1efa079e6bff929a7df1e8ddcbb9d2a83d25188a8a2cb03e029b4285b4830c6b\"" Nov 1 00:38:50.065397 systemd[1]: Started cri-containerd-1efa079e6bff929a7df1e8ddcbb9d2a83d25188a8a2cb03e029b4285b4830c6b.scope. Nov 1 00:38:50.070656 env[1183]: time="2025-11-01T00:38:50.070480078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-368ce9a156,Uid:8a6d131c578340116dc6222a2144d19d,Namespace:kube-system,Attempt:0,} returns sandbox id \"e47c795b295a39b67fe329af0a769aeaf2fe32e813539515f0e4bb999d914126\"" Nov 1 00:38:50.074012 kubelet[1546]: E1101 00:38:50.073953 1546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:38:50.083820 env[1183]: time="2025-11-01T00:38:50.082041392Z" level=info msg="CreateContainer within sandbox \"e47c795b295a39b67fe329af0a769aeaf2fe32e813539515f0e4bb999d914126\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 1 00:38:50.123219 env[1183]: time="2025-11-01T00:38:50.123120814Z" level=info msg="CreateContainer within sandbox \"e47c795b295a39b67fe329af0a769aeaf2fe32e813539515f0e4bb999d914126\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"89ddb37f007c10b63aeb94bde8fbc8286fe02cb5bafda5c2f27e00c908d44df3\"" Nov 1 00:38:50.123500 env[1183]: time="2025-11-01T00:38:50.123415839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-368ce9a156,Uid:fc1188887baeed8969b5f5a5236c9082,Namespace:kube-system,Attempt:0,} returns sandbox id \"9000cede242746323fc2df272ac3f46db0a80602903ed02a9447adb5aa50d9d2\"" Nov 1 00:38:50.126039 env[1183]: time="2025-11-01T00:38:50.125991206Z" level=info msg="StartContainer for \"89ddb37f007c10b63aeb94bde8fbc8286fe02cb5bafda5c2f27e00c908d44df3\"" Nov 1 00:38:50.132258 kubelet[1546]: E1101 00:38:50.131960 1546 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:38:50.136592 env[1183]: time="2025-11-01T00:38:50.136530612Z" level=info msg="CreateContainer within sandbox \"9000cede242746323fc2df272ac3f46db0a80602903ed02a9447adb5aa50d9d2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 1 00:38:50.155543 env[1183]: time="2025-11-01T00:38:50.155470218Z" level=info msg="CreateContainer within sandbox \"9000cede242746323fc2df272ac3f46db0a80602903ed02a9447adb5aa50d9d2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4073bf0726039a7ad391dc0e5035fac72ae80a73f4aeb6353125c194013ec622\"" Nov 1 00:38:50.156742 env[1183]: time="2025-11-01T00:38:50.156506525Z" level=info msg="StartContainer for \"4073bf0726039a7ad391dc0e5035fac72ae80a73f4aeb6353125c194013ec622\"" Nov 1 00:38:50.164412 systemd[1]: Started cri-containerd-89ddb37f007c10b63aeb94bde8fbc8286fe02cb5bafda5c2f27e00c908d44df3.scope. Nov 1 00:38:50.208735 env[1183]: time="2025-11-01T00:38:50.208669582Z" level=info msg="StartContainer for \"1efa079e6bff929a7df1e8ddcbb9d2a83d25188a8a2cb03e029b4285b4830c6b\" returns successfully" Nov 1 00:38:50.212228 kubelet[1546]: E1101 00:38:50.212125 1546 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://146.190.139.75:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-368ce9a156?timeout=10s\": dial tcp 146.190.139.75:6443: connect: connection refused" interval="1.6s" Nov 1 00:38:50.231324 systemd[1]: Started cri-containerd-4073bf0726039a7ad391dc0e5035fac72ae80a73f4aeb6353125c194013ec622.scope. 
Nov 1 00:38:50.261035 env[1183]: time="2025-11-01T00:38:50.260971735Z" level=info msg="StartContainer for \"89ddb37f007c10b63aeb94bde8fbc8286fe02cb5bafda5c2f27e00c908d44df3\" returns successfully" Nov 1 00:38:50.305600 env[1183]: time="2025-11-01T00:38:50.305522513Z" level=info msg="StartContainer for \"4073bf0726039a7ad391dc0e5035fac72ae80a73f4aeb6353125c194013ec622\" returns successfully" Nov 1 00:38:50.329099 kubelet[1546]: W1101 00:38:50.328917 1546 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://146.190.139.75:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 146.190.139.75:6443: connect: connection refused Nov 1 00:38:50.329099 kubelet[1546]: E1101 00:38:50.329030 1546 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://146.190.139.75:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 146.190.139.75:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:38:50.372890 kubelet[1546]: I1101 00:38:50.372537 1546 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-368ce9a156" Nov 1 00:38:50.374414 kubelet[1546]: E1101 00:38:50.374340 1546 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://146.190.139.75:6443/api/v1/nodes\": dial tcp 146.190.139.75:6443: connect: connection refused" node="ci-3510.3.8-n-368ce9a156" Nov 1 00:38:50.405904 kubelet[1546]: W1101 00:38:50.405781 1546 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://146.190.139.75:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 146.190.139.75:6443: connect: connection refused Nov 1 00:38:50.406225 kubelet[1546]: E1101 00:38:50.406167 1546 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://146.190.139.75:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 146.190.139.75:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:38:50.843412 kubelet[1546]: E1101 00:38:50.843369 1546 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-368ce9a156\" not found" node="ci-3510.3.8-n-368ce9a156" Nov 1 00:38:50.844063 kubelet[1546]: E1101 00:38:50.844032 1546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:38:50.846999 kubelet[1546]: E1101 00:38:50.846967 1546 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-368ce9a156\" not found" node="ci-3510.3.8-n-368ce9a156" Nov 1 00:38:50.847515 kubelet[1546]: E1101 00:38:50.847492 1546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:38:50.849372 kubelet[1546]: E1101 00:38:50.849339 1546 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-368ce9a156\" not found" node="ci-3510.3.8-n-368ce9a156" Nov 1 00:38:50.849729 kubelet[1546]: E1101 00:38:50.849703 1546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:38:50.898886 kubelet[1546]: E1101 00:38:50.898831 1546 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control 
plane: cannot create certificate signing request: Post \"https://146.190.139.75:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 146.190.139.75:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:38:51.850732 kubelet[1546]: E1101 00:38:51.850693 1546 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-368ce9a156\" not found" node="ci-3510.3.8-n-368ce9a156" Nov 1 00:38:51.851641 kubelet[1546]: E1101 00:38:51.851459 1546 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-368ce9a156\" not found" node="ci-3510.3.8-n-368ce9a156" Nov 1 00:38:51.851831 kubelet[1546]: E1101 00:38:51.851811 1546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:38:51.851955 kubelet[1546]: E1101 00:38:51.851825 1546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:38:51.976731 kubelet[1546]: I1101 00:38:51.976697 1546 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-368ce9a156" Nov 1 00:38:52.779482 kubelet[1546]: I1101 00:38:52.779438 1546 apiserver.go:52] "Watching apiserver" Nov 1 00:38:52.790452 kubelet[1546]: E1101 00:38:52.790395 1546 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.8-n-368ce9a156\" not found" node="ci-3510.3.8-n-368ce9a156" Nov 1 00:38:52.804520 kubelet[1546]: I1101 00:38:52.804470 1546 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 00:38:52.891089 kubelet[1546]: I1101 00:38:52.891031 1546 kubelet_node_status.go:78] "Successfully registered node" 
node="ci-3510.3.8-n-368ce9a156" Nov 1 00:38:52.909222 kubelet[1546]: I1101 00:38:52.909172 1546 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-368ce9a156" Nov 1 00:38:52.917864 kubelet[1546]: E1101 00:38:52.917803 1546 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.8-n-368ce9a156\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510.3.8-n-368ce9a156" Nov 1 00:38:52.917864 kubelet[1546]: I1101 00:38:52.917844 1546 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-368ce9a156" Nov 1 00:38:52.923060 kubelet[1546]: E1101 00:38:52.923003 1546 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-3510.3.8-n-368ce9a156\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-368ce9a156" Nov 1 00:38:52.923390 kubelet[1546]: I1101 00:38:52.923296 1546 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-368ce9a156" Nov 1 00:38:52.932018 kubelet[1546]: E1101 00:38:52.931965 1546 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.8-n-368ce9a156\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-3510.3.8-n-368ce9a156" Nov 1 00:38:55.074252 kubelet[1546]: I1101 00:38:55.074191 1546 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-368ce9a156" Nov 1 00:38:55.091876 systemd[1]: Reloading. 
Nov 1 00:38:55.101519 kubelet[1546]: W1101 00:38:55.101469 1546 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 00:38:55.101817 kubelet[1546]: E1101 00:38:55.101794 1546 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:38:55.231954 /usr/lib/systemd/system-generators/torcx-generator[1832]: time="2025-11-01T00:38:55Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 00:38:55.231986 /usr/lib/systemd/system-generators/torcx-generator[1832]: time="2025-11-01T00:38:55Z" level=info msg="torcx already run" Nov 1 00:38:55.327974 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 00:38:55.327995 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 00:38:55.349015 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:38:55.491768 systemd[1]: Stopping kubelet.service... Nov 1 00:38:55.506380 systemd[1]: kubelet.service: Deactivated successfully. Nov 1 00:38:55.506714 systemd[1]: Stopped kubelet.service. Nov 1 00:38:55.506822 systemd[1]: kubelet.service: Consumed 1.092s CPU time. Nov 1 00:38:55.510128 systemd[1]: Starting kubelet.service... Nov 1 00:38:56.567856 systemd[1]: Started kubelet.service. 
Nov 1 00:38:56.680016 kubelet[1883]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:38:56.680016 kubelet[1883]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 00:38:56.680016 kubelet[1883]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:38:56.680016 kubelet[1883]: I1101 00:38:56.679505 1883 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:38:56.701270 kubelet[1883]: I1101 00:38:56.701198 1883 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 1 00:38:56.701270 kubelet[1883]: I1101 00:38:56.701250 1883 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:38:56.702475 kubelet[1883]: I1101 00:38:56.702366 1883 server.go:954] "Client rotation is on, will bootstrap in background" Nov 1 00:38:56.711888 sudo[1897]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Nov 1 00:38:56.712161 sudo[1897]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Nov 1 00:38:56.715812 kubelet[1883]: I1101 00:38:56.714773 1883 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Nov 1 00:38:56.726997 kubelet[1883]: I1101 00:38:56.726951 1883 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:38:56.735658 kubelet[1883]: E1101 00:38:56.735590 1883 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 00:38:56.735658 kubelet[1883]: I1101 00:38:56.735651 1883 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 1 00:38:56.742893 kubelet[1883]: I1101 00:38:56.742834 1883 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 1 00:38:56.743750 kubelet[1883]: I1101 00:38:56.743660 1883 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:38:56.744289 kubelet[1883]: I1101 00:38:56.743928 1883 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-3510.3.8-n-368ce9a156","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 1 00:38:56.744591 kubelet[1883]: I1101 00:38:56.744515 1883 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 00:38:56.744732 kubelet[1883]: I1101 00:38:56.744714 1883 container_manager_linux.go:304] "Creating device plugin manager" Nov 1 00:38:56.746956 kubelet[1883]: I1101 00:38:56.746923 1883 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:38:56.747165 kubelet[1883]: I1101 00:38:56.747146 1883 kubelet.go:446] 
"Attempting to sync node with API server" Nov 1 00:38:56.753089 kubelet[1883]: I1101 00:38:56.753040 1883 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:38:56.753089 kubelet[1883]: I1101 00:38:56.753096 1883 kubelet.go:352] "Adding apiserver pod source" Nov 1 00:38:56.753089 kubelet[1883]: I1101 00:38:56.753110 1883 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:38:56.763502 kubelet[1883]: I1101 00:38:56.763454 1883 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Nov 1 00:38:56.764015 kubelet[1883]: I1101 00:38:56.763985 1883 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 1 00:38:56.764468 kubelet[1883]: I1101 00:38:56.764431 1883 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 00:38:56.764468 kubelet[1883]: I1101 00:38:56.764466 1883 server.go:1287] "Started kubelet" Nov 1 00:38:56.772813 kubelet[1883]: I1101 00:38:56.772774 1883 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:38:56.773914 kubelet[1883]: I1101 00:38:56.773886 1883 apiserver.go:52] "Watching apiserver" Nov 1 00:38:56.792270 kubelet[1883]: E1101 00:38:56.792226 1883 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 00:38:56.797236 kubelet[1883]: I1101 00:38:56.797197 1883 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 00:38:56.799016 kubelet[1883]: I1101 00:38:56.798984 1883 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 00:38:56.799156 kubelet[1883]: I1101 00:38:56.799140 1883 reconciler.go:26] "Reconciler: start to sync state" Nov 1 00:38:56.800740 kubelet[1883]: I1101 00:38:56.800696 1883 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:38:56.804546 kubelet[1883]: I1101 00:38:56.804511 1883 server.go:479] "Adding debug handlers to kubelet server" Nov 1 00:38:56.806542 kubelet[1883]: I1101 00:38:56.806483 1883 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:38:56.806775 kubelet[1883]: I1101 00:38:56.806756 1883 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:38:56.807047 kubelet[1883]: I1101 00:38:56.807024 1883 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:38:56.821035 kubelet[1883]: I1101 00:38:56.820760 1883 factory.go:221] Registration of the containerd container factory successfully Nov 1 00:38:56.821035 kubelet[1883]: I1101 00:38:56.820783 1883 factory.go:221] Registration of the systemd container factory successfully Nov 1 00:38:56.821035 kubelet[1883]: I1101 00:38:56.820880 1883 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:38:56.831711 kubelet[1883]: I1101 00:38:56.831653 1883 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Nov 1 00:38:56.845471 kubelet[1883]: I1101 00:38:56.845424 1883 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 1 00:38:56.845471 kubelet[1883]: I1101 00:38:56.845462 1883 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 1 00:38:56.845728 kubelet[1883]: I1101 00:38:56.845494 1883 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 1 00:38:56.845728 kubelet[1883]: I1101 00:38:56.845506 1883 kubelet.go:2382] "Starting kubelet main sync loop" Nov 1 00:38:56.845728 kubelet[1883]: E1101 00:38:56.845708 1883 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:38:56.911501 kubelet[1883]: I1101 00:38:56.911441 1883 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:38:56.911501 kubelet[1883]: I1101 00:38:56.911468 1883 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:38:56.911501 kubelet[1883]: I1101 00:38:56.911491 1883 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:38:56.911799 kubelet[1883]: I1101 00:38:56.911731 1883 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 1 00:38:56.911799 kubelet[1883]: I1101 00:38:56.911743 1883 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 1 00:38:56.911799 kubelet[1883]: I1101 00:38:56.911762 1883 policy_none.go:49] "None policy: Start" Nov 1 00:38:56.911799 kubelet[1883]: I1101 00:38:56.911771 1883 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 00:38:56.911799 kubelet[1883]: I1101 00:38:56.911782 1883 state_mem.go:35] "Initializing new in-memory state store" Nov 1 00:38:56.911945 kubelet[1883]: I1101 00:38:56.911900 1883 state_mem.go:75] "Updated machine memory state" Nov 1 00:38:56.918606 kubelet[1883]: I1101 00:38:56.917935 1883 manager.go:519] "Failed to read data from 
checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 1 00:38:56.918606 kubelet[1883]: I1101 00:38:56.918231 1883 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:38:56.918606 kubelet[1883]: I1101 00:38:56.918245 1883 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:38:56.919108 kubelet[1883]: I1101 00:38:56.919013 1883 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:38:56.921991 kubelet[1883]: E1101 00:38:56.921896 1883 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 1 00:38:56.948675 kubelet[1883]: I1101 00:38:56.948588 1883 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-368ce9a156" Nov 1 00:38:56.953440 kubelet[1883]: I1101 00:38:56.953391 1883 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-368ce9a156" Nov 1 00:38:56.962660 kubelet[1883]: W1101 00:38:56.961740 1883 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 00:38:56.983465 kubelet[1883]: W1101 00:38:56.983417 1883 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 00:38:57.006524 kubelet[1883]: I1101 00:38:57.006466 1883 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 00:38:57.029944 kubelet[1883]: I1101 00:38:57.029903 1883 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-368ce9a156" Nov 1 00:38:57.053373 kubelet[1883]: I1101 00:38:57.053323 1883 kubelet_node_status.go:124] "Node was previously registered" 
node="ci-3510.3.8-n-368ce9a156" Nov 1 00:38:57.053662 kubelet[1883]: I1101 00:38:57.053443 1883 kubelet_node_status.go:78] "Successfully registered node" node="ci-3510.3.8-n-368ce9a156" Nov 1 00:38:57.084381 kubelet[1883]: I1101 00:38:57.084239 1883 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.8-n-368ce9a156" podStartSLOduration=2.0842197 podStartE2EDuration="2.0842197s" podCreationTimestamp="2025-11-01 00:38:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:38:57.039075448 +0000 UTC m=+0.458913053" watchObservedRunningTime="2025-11-01 00:38:57.0842197 +0000 UTC m=+0.504057288" Nov 1 00:38:57.106803 kubelet[1883]: I1101 00:38:57.106735 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fc1188887baeed8969b5f5a5236c9082-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-368ce9a156\" (UID: \"fc1188887baeed8969b5f5a5236c9082\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-368ce9a156" Nov 1 00:38:57.106803 kubelet[1883]: I1101 00:38:57.106792 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fc1188887baeed8969b5f5a5236c9082-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-368ce9a156\" (UID: \"fc1188887baeed8969b5f5a5236c9082\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-368ce9a156" Nov 1 00:38:57.106803 kubelet[1883]: I1101 00:38:57.106815 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0270a638842732e8d45e7783cebfee48-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-368ce9a156\" (UID: \"0270a638842732e8d45e7783cebfee48\") " 
pod="kube-system/kube-scheduler-ci-3510.3.8-n-368ce9a156" Nov 1 00:38:57.107074 kubelet[1883]: I1101 00:38:57.106834 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a6d131c578340116dc6222a2144d19d-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-368ce9a156\" (UID: \"8a6d131c578340116dc6222a2144d19d\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-368ce9a156" Nov 1 00:38:57.107074 kubelet[1883]: I1101 00:38:57.106851 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a6d131c578340116dc6222a2144d19d-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-368ce9a156\" (UID: \"8a6d131c578340116dc6222a2144d19d\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-368ce9a156" Nov 1 00:38:57.107074 kubelet[1883]: I1101 00:38:57.106867 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fc1188887baeed8969b5f5a5236c9082-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-368ce9a156\" (UID: \"fc1188887baeed8969b5f5a5236c9082\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-368ce9a156" Nov 1 00:38:57.107074 kubelet[1883]: I1101 00:38:57.106884 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fc1188887baeed8969b5f5a5236c9082-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-368ce9a156\" (UID: \"fc1188887baeed8969b5f5a5236c9082\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-368ce9a156" Nov 1 00:38:57.107074 kubelet[1883]: I1101 00:38:57.106901 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/8a6d131c578340116dc6222a2144d19d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-368ce9a156\" (UID: \"8a6d131c578340116dc6222a2144d19d\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-368ce9a156" Nov 1 00:38:57.107248 kubelet[1883]: I1101 00:38:57.106922 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fc1188887baeed8969b5f5a5236c9082-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-368ce9a156\" (UID: \"fc1188887baeed8969b5f5a5236c9082\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-368ce9a156" Nov 1 00:38:57.157433 kubelet[1883]: I1101 00:38:57.157356 1883 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.8-n-368ce9a156" podStartSLOduration=1.157335273 podStartE2EDuration="1.157335273s" podCreationTimestamp="2025-11-01 00:38:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:38:57.085261298 +0000 UTC m=+0.505098900" watchObservedRunningTime="2025-11-01 00:38:57.157335273 +0000 UTC m=+0.577172867" Nov 1 00:38:57.254564 kubelet[1883]: E1101 00:38:57.254504 1883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:38:57.262441 kubelet[1883]: E1101 00:38:57.262368 1883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:38:57.284851 kubelet[1883]: E1101 00:38:57.284792 1883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 
00:38:57.671538 sudo[1897]: pam_unix(sudo:session): session closed for user root Nov 1 00:38:57.885160 kubelet[1883]: E1101 00:38:57.885107 1883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:38:57.885947 kubelet[1883]: E1101 00:38:57.885917 1883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:38:57.905731 kubelet[1883]: I1101 00:38:57.905664 1883 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-368ce9a156" podStartSLOduration=1.90564505 podStartE2EDuration="1.90564505s" podCreationTimestamp="2025-11-01 00:38:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:38:57.158168803 +0000 UTC m=+0.578006401" watchObservedRunningTime="2025-11-01 00:38:57.90564505 +0000 UTC m=+1.325482654" Nov 1 00:38:58.256118 kubelet[1883]: E1101 00:38:58.256076 1883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:38:58.887139 kubelet[1883]: E1101 00:38:58.887077 1883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:38:58.889112 kubelet[1883]: E1101 00:38:58.888002 1883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:38:59.557765 sudo[1292]: pam_unix(sudo:session): session closed for user root Nov 1 
00:38:59.563878 sshd[1289]: pam_unix(sshd:session): session closed for user core Nov 1 00:38:59.567845 systemd-logind[1177]: Session 5 logged out. Waiting for processes to exit. Nov 1 00:38:59.568591 systemd[1]: sshd@4-146.190.139.75:22-139.178.89.65:52686.service: Deactivated successfully. Nov 1 00:38:59.569391 systemd[1]: session-5.scope: Deactivated successfully. Nov 1 00:38:59.569583 systemd[1]: session-5.scope: Consumed 5.734s CPU time. Nov 1 00:38:59.570486 systemd-logind[1177]: Removed session 5. Nov 1 00:38:59.680635 kubelet[1883]: I1101 00:38:59.680582 1883 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 1 00:38:59.681291 env[1183]: time="2025-11-01T00:38:59.681224474Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 1 00:38:59.681686 kubelet[1883]: I1101 00:38:59.681490 1883 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 1 00:39:00.250496 kubelet[1883]: E1101 00:39:00.250455 1883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:39:00.633117 systemd[1]: Created slice kubepods-besteffort-podbaad0840_2145_4f9c_b415_8723bc4dd2b6.slice. Nov 1 00:39:00.644248 systemd[1]: Created slice kubepods-burstable-pod5ec39fd0_dc62_4162_bce5_cc595ded4176.slice. 
Nov 1 00:39:00.731329 kubelet[1883]: I1101 00:39:00.731216 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/baad0840-2145-4f9c-b415-8723bc4dd2b6-lib-modules\") pod \"kube-proxy-stgjr\" (UID: \"baad0840-2145-4f9c-b415-8723bc4dd2b6\") " pod="kube-system/kube-proxy-stgjr" Nov 1 00:39:00.731584 kubelet[1883]: I1101 00:39:00.731340 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5ec39fd0-dc62-4162-bce5-cc595ded4176-cilium-run\") pod \"cilium-mxsh7\" (UID: \"5ec39fd0-dc62-4162-bce5-cc595ded4176\") " pod="kube-system/cilium-mxsh7" Nov 1 00:39:00.731584 kubelet[1883]: I1101 00:39:00.731379 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5ec39fd0-dc62-4162-bce5-cc595ded4176-lib-modules\") pod \"cilium-mxsh7\" (UID: \"5ec39fd0-dc62-4162-bce5-cc595ded4176\") " pod="kube-system/cilium-mxsh7" Nov 1 00:39:00.731584 kubelet[1883]: I1101 00:39:00.731430 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/baad0840-2145-4f9c-b415-8723bc4dd2b6-kube-proxy\") pod \"kube-proxy-stgjr\" (UID: \"baad0840-2145-4f9c-b415-8723bc4dd2b6\") " pod="kube-system/kube-proxy-stgjr" Nov 1 00:39:00.731584 kubelet[1883]: I1101 00:39:00.731460 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5ec39fd0-dc62-4162-bce5-cc595ded4176-host-proc-sys-kernel\") pod \"cilium-mxsh7\" (UID: \"5ec39fd0-dc62-4162-bce5-cc595ded4176\") " pod="kube-system/cilium-mxsh7" Nov 1 00:39:00.731584 kubelet[1883]: I1101 00:39:00.731500 1883 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pckp\" (UniqueName: \"kubernetes.io/projected/baad0840-2145-4f9c-b415-8723bc4dd2b6-kube-api-access-9pckp\") pod \"kube-proxy-stgjr\" (UID: \"baad0840-2145-4f9c-b415-8723bc4dd2b6\") " pod="kube-system/kube-proxy-stgjr" Nov 1 00:39:00.731857 kubelet[1883]: I1101 00:39:00.731523 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5ec39fd0-dc62-4162-bce5-cc595ded4176-cilium-cgroup\") pod \"cilium-mxsh7\" (UID: \"5ec39fd0-dc62-4162-bce5-cc595ded4176\") " pod="kube-system/cilium-mxsh7" Nov 1 00:39:00.731857 kubelet[1883]: I1101 00:39:00.731539 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5ec39fd0-dc62-4162-bce5-cc595ded4176-host-proc-sys-net\") pod \"cilium-mxsh7\" (UID: \"5ec39fd0-dc62-4162-bce5-cc595ded4176\") " pod="kube-system/cilium-mxsh7" Nov 1 00:39:00.731857 kubelet[1883]: I1101 00:39:00.731568 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tqhj\" (UniqueName: \"kubernetes.io/projected/5ec39fd0-dc62-4162-bce5-cc595ded4176-kube-api-access-9tqhj\") pod \"cilium-mxsh7\" (UID: \"5ec39fd0-dc62-4162-bce5-cc595ded4176\") " pod="kube-system/cilium-mxsh7" Nov 1 00:39:00.731857 kubelet[1883]: I1101 00:39:00.731584 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5ec39fd0-dc62-4162-bce5-cc595ded4176-xtables-lock\") pod \"cilium-mxsh7\" (UID: \"5ec39fd0-dc62-4162-bce5-cc595ded4176\") " pod="kube-system/cilium-mxsh7" Nov 1 00:39:00.731857 kubelet[1883]: I1101 00:39:00.731599 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" 
(UniqueName: \"kubernetes.io/secret/5ec39fd0-dc62-4162-bce5-cc595ded4176-clustermesh-secrets\") pod \"cilium-mxsh7\" (UID: \"5ec39fd0-dc62-4162-bce5-cc595ded4176\") " pod="kube-system/cilium-mxsh7" Nov 1 00:39:00.731857 kubelet[1883]: I1101 00:39:00.731642 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5ec39fd0-dc62-4162-bce5-cc595ded4176-bpf-maps\") pod \"cilium-mxsh7\" (UID: \"5ec39fd0-dc62-4162-bce5-cc595ded4176\") " pod="kube-system/cilium-mxsh7" Nov 1 00:39:00.732127 kubelet[1883]: I1101 00:39:00.731659 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5ec39fd0-dc62-4162-bce5-cc595ded4176-cni-path\") pod \"cilium-mxsh7\" (UID: \"5ec39fd0-dc62-4162-bce5-cc595ded4176\") " pod="kube-system/cilium-mxsh7" Nov 1 00:39:00.732127 kubelet[1883]: I1101 00:39:00.731677 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5ec39fd0-dc62-4162-bce5-cc595ded4176-etc-cni-netd\") pod \"cilium-mxsh7\" (UID: \"5ec39fd0-dc62-4162-bce5-cc595ded4176\") " pod="kube-system/cilium-mxsh7" Nov 1 00:39:00.732127 kubelet[1883]: I1101 00:39:00.731695 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/baad0840-2145-4f9c-b415-8723bc4dd2b6-xtables-lock\") pod \"kube-proxy-stgjr\" (UID: \"baad0840-2145-4f9c-b415-8723bc4dd2b6\") " pod="kube-system/kube-proxy-stgjr" Nov 1 00:39:00.732127 kubelet[1883]: I1101 00:39:00.731725 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5ec39fd0-dc62-4162-bce5-cc595ded4176-hostproc\") pod \"cilium-mxsh7\" (UID: \"5ec39fd0-dc62-4162-bce5-cc595ded4176\") 
" pod="kube-system/cilium-mxsh7" Nov 1 00:39:00.732127 kubelet[1883]: I1101 00:39:00.731756 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5ec39fd0-dc62-4162-bce5-cc595ded4176-hubble-tls\") pod \"cilium-mxsh7\" (UID: \"5ec39fd0-dc62-4162-bce5-cc595ded4176\") " pod="kube-system/cilium-mxsh7" Nov 1 00:39:00.732127 kubelet[1883]: I1101 00:39:00.731789 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5ec39fd0-dc62-4162-bce5-cc595ded4176-cilium-config-path\") pod \"cilium-mxsh7\" (UID: \"5ec39fd0-dc62-4162-bce5-cc595ded4176\") " pod="kube-system/cilium-mxsh7" Nov 1 00:39:00.834101 kubelet[1883]: I1101 00:39:00.834040 1883 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 1 00:39:00.873202 systemd[1]: Created slice kubepods-besteffort-pod5485e9b1_7b73_471f_a2b5_fa031b8ca2cc.slice. 
Nov 1 00:39:00.892159 kubelet[1883]: E1101 00:39:00.892011 1883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:39:00.941979 kubelet[1883]: E1101 00:39:00.941910 1883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:39:00.943143 env[1183]: time="2025-11-01T00:39:00.943080584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-stgjr,Uid:baad0840-2145-4f9c-b415-8723bc4dd2b6,Namespace:kube-system,Attempt:0,}" Nov 1 00:39:00.947955 kubelet[1883]: E1101 00:39:00.947910 1883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:39:00.948794 env[1183]: time="2025-11-01T00:39:00.948737886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mxsh7,Uid:5ec39fd0-dc62-4162-bce5-cc595ded4176,Namespace:kube-system,Attempt:0,}" Nov 1 00:39:00.980376 env[1183]: time="2025-11-01T00:39:00.980248950Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:39:00.980556 env[1183]: time="2025-11-01T00:39:00.980415360Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:39:00.980687 env[1183]: time="2025-11-01T00:39:00.980511604Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:39:00.981023 env[1183]: time="2025-11-01T00:39:00.980930467Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/403b846a1936f99aa61d8db56aaf854832b474c4a7dc204b770b020eaa19e491 pid=1970 runtime=io.containerd.runc.v2 Nov 1 00:39:00.985559 env[1183]: time="2025-11-01T00:39:00.985432092Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:39:00.985559 env[1183]: time="2025-11-01T00:39:00.985484355Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:39:00.985559 env[1183]: time="2025-11-01T00:39:00.985495670Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:39:00.987704 env[1183]: time="2025-11-01T00:39:00.987107485Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e5061f9cd5cd0d21bdf7a4e6280c48a32b4dfc1e45154c0db416e45698117c01 pid=1971 runtime=io.containerd.runc.v2 Nov 1 00:39:01.004031 systemd[1]: Started cri-containerd-403b846a1936f99aa61d8db56aaf854832b474c4a7dc204b770b020eaa19e491.scope. 
Nov 1 00:39:01.034037 kubelet[1883]: I1101 00:39:01.033887 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5485e9b1-7b73-471f-a2b5-fa031b8ca2cc-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-zttvf\" (UID: \"5485e9b1-7b73-471f-a2b5-fa031b8ca2cc\") " pod="kube-system/cilium-operator-6c4d7847fc-zttvf" Nov 1 00:39:01.034037 kubelet[1883]: I1101 00:39:01.033950 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnvkz\" (UniqueName: \"kubernetes.io/projected/5485e9b1-7b73-471f-a2b5-fa031b8ca2cc-kube-api-access-xnvkz\") pod \"cilium-operator-6c4d7847fc-zttvf\" (UID: \"5485e9b1-7b73-471f-a2b5-fa031b8ca2cc\") " pod="kube-system/cilium-operator-6c4d7847fc-zttvf" Nov 1 00:39:01.036816 systemd[1]: Started cri-containerd-e5061f9cd5cd0d21bdf7a4e6280c48a32b4dfc1e45154c0db416e45698117c01.scope. Nov 1 00:39:01.079231 env[1183]: time="2025-11-01T00:39:01.079164773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-stgjr,Uid:baad0840-2145-4f9c-b415-8723bc4dd2b6,Namespace:kube-system,Attempt:0,} returns sandbox id \"403b846a1936f99aa61d8db56aaf854832b474c4a7dc204b770b020eaa19e491\"" Nov 1 00:39:01.081173 kubelet[1883]: E1101 00:39:01.080450 1883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:39:01.092116 env[1183]: time="2025-11-01T00:39:01.091977143Z" level=info msg="CreateContainer within sandbox \"403b846a1936f99aa61d8db56aaf854832b474c4a7dc204b770b020eaa19e491\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 1 00:39:01.099358 env[1183]: time="2025-11-01T00:39:01.099205040Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-mxsh7,Uid:5ec39fd0-dc62-4162-bce5-cc595ded4176,Namespace:kube-system,Attempt:0,} returns sandbox id \"e5061f9cd5cd0d21bdf7a4e6280c48a32b4dfc1e45154c0db416e45698117c01\"" Nov 1 00:39:01.100701 kubelet[1883]: E1101 00:39:01.100517 1883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:39:01.107870 env[1183]: time="2025-11-01T00:39:01.107805394Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Nov 1 00:39:01.128663 env[1183]: time="2025-11-01T00:39:01.128075946Z" level=info msg="CreateContainer within sandbox \"403b846a1936f99aa61d8db56aaf854832b474c4a7dc204b770b020eaa19e491\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ec76891a8d21a468c02d7c896dcfc16971f475344d58213a41774e05d2c69d4f\"" Nov 1 00:39:01.129939 env[1183]: time="2025-11-01T00:39:01.129880168Z" level=info msg="StartContainer for \"ec76891a8d21a468c02d7c896dcfc16971f475344d58213a41774e05d2c69d4f\"" Nov 1 00:39:01.178046 systemd[1]: Started cri-containerd-ec76891a8d21a468c02d7c896dcfc16971f475344d58213a41774e05d2c69d4f.scope. Nov 1 00:39:01.178819 kubelet[1883]: E1101 00:39:01.178791 1883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:39:01.182532 env[1183]: time="2025-11-01T00:39:01.182485765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-zttvf,Uid:5485e9b1-7b73-471f-a2b5-fa031b8ca2cc,Namespace:kube-system,Attempt:0,}" Nov 1 00:39:01.217334 env[1183]: time="2025-11-01T00:39:01.216743407Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:39:01.217334 env[1183]: time="2025-11-01T00:39:01.216867824Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:39:01.217334 env[1183]: time="2025-11-01T00:39:01.216891036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:39:01.217850 env[1183]: time="2025-11-01T00:39:01.217691331Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5bd4edccf31a80f9714b6236668ff25ac46fb29e5f042794a1dec405a5fcd1e8 pid=2075 runtime=io.containerd.runc.v2 Nov 1 00:39:01.251447 systemd[1]: Started cri-containerd-5bd4edccf31a80f9714b6236668ff25ac46fb29e5f042794a1dec405a5fcd1e8.scope. Nov 1 00:39:01.284782 env[1183]: time="2025-11-01T00:39:01.284598592Z" level=info msg="StartContainer for \"ec76891a8d21a468c02d7c896dcfc16971f475344d58213a41774e05d2c69d4f\" returns successfully" Nov 1 00:39:01.336756 env[1183]: time="2025-11-01T00:39:01.336657037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-zttvf,Uid:5485e9b1-7b73-471f-a2b5-fa031b8ca2cc,Namespace:kube-system,Attempt:0,} returns sandbox id \"5bd4edccf31a80f9714b6236668ff25ac46fb29e5f042794a1dec405a5fcd1e8\"" Nov 1 00:39:01.338049 kubelet[1883]: E1101 00:39:01.338002 1883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:39:01.746343 kubelet[1883]: E1101 00:39:01.746298 1883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:39:01.901121 kubelet[1883]: E1101 00:39:01.901060 1883 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:39:01.902020 kubelet[1883]: E1101 00:39:01.901977 1883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:39:01.964111 kubelet[1883]: I1101 00:39:01.964018 1883 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-stgjr" podStartSLOduration=1.963991741 podStartE2EDuration="1.963991741s" podCreationTimestamp="2025-11-01 00:39:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:39:01.934771146 +0000 UTC m=+5.354608760" watchObservedRunningTime="2025-11-01 00:39:01.963991741 +0000 UTC m=+5.383829361" Nov 1 00:39:02.959339 kubelet[1883]: E1101 00:39:02.958485 1883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:39:02.986949 kubelet[1883]: E1101 00:39:02.986858 1883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:39:03.972302 systemd[1]: Started sshd@5-146.190.139.75:22-196.251.114.29:51824.service. 
Nov 1 00:39:03.993793 kubelet[1883]: E1101 00:39:03.988587 1883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:39:04.149659 sshd[2248]: kex_exchange_identification: Connection closed by remote host Nov 1 00:39:04.149659 sshd[2248]: Connection closed by 196.251.114.29 port 51824 Nov 1 00:39:04.150837 systemd[1]: sshd@5-146.190.139.75:22-196.251.114.29:51824.service: Deactivated successfully. Nov 1 00:39:07.865106 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3248262486.mount: Deactivated successfully. Nov 1 00:39:10.390178 update_engine[1178]: I1101 00:39:10.390096 1178 update_attempter.cc:509] Updating boot flags... Nov 1 00:39:12.113936 env[1183]: time="2025-11-01T00:39:12.113861803Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:39:12.117166 env[1183]: time="2025-11-01T00:39:12.117096177Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:39:12.119672 env[1183]: time="2025-11-01T00:39:12.119592469Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:39:12.120911 env[1183]: time="2025-11-01T00:39:12.120841059Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Nov 1 00:39:12.124732 env[1183]: 
time="2025-11-01T00:39:12.124660217Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Nov 1 00:39:12.127019 env[1183]: time="2025-11-01T00:39:12.125019157Z" level=info msg="CreateContainer within sandbox \"e5061f9cd5cd0d21bdf7a4e6280c48a32b4dfc1e45154c0db416e45698117c01\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 1 00:39:12.153352 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2704121388.mount: Deactivated successfully. Nov 1 00:39:12.163463 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3991911186.mount: Deactivated successfully. Nov 1 00:39:12.171753 env[1183]: time="2025-11-01T00:39:12.171662691Z" level=info msg="CreateContainer within sandbox \"e5061f9cd5cd0d21bdf7a4e6280c48a32b4dfc1e45154c0db416e45698117c01\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7764e61917254604b3f3ca8b682aecb44431882b2f6f6f6a7774c25b2b3579b5\"" Nov 1 00:39:12.175078 env[1183]: time="2025-11-01T00:39:12.175018254Z" level=info msg="StartContainer for \"7764e61917254604b3f3ca8b682aecb44431882b2f6f6f6a7774c25b2b3579b5\"" Nov 1 00:39:12.204467 systemd[1]: Started cri-containerd-7764e61917254604b3f3ca8b682aecb44431882b2f6f6f6a7774c25b2b3579b5.scope. Nov 1 00:39:12.275368 systemd[1]: cri-containerd-7764e61917254604b3f3ca8b682aecb44431882b2f6f6f6a7774c25b2b3579b5.scope: Deactivated successfully. 
Nov 1 00:39:12.295997 env[1183]: time="2025-11-01T00:39:12.295922732Z" level=info msg="StartContainer for \"7764e61917254604b3f3ca8b682aecb44431882b2f6f6f6a7774c25b2b3579b5\" returns successfully" Nov 1 00:39:12.329341 env[1183]: time="2025-11-01T00:39:12.329272339Z" level=info msg="shim disconnected" id=7764e61917254604b3f3ca8b682aecb44431882b2f6f6f6a7774c25b2b3579b5 Nov 1 00:39:12.329725 env[1183]: time="2025-11-01T00:39:12.329697610Z" level=warning msg="cleaning up after shim disconnected" id=7764e61917254604b3f3ca8b682aecb44431882b2f6f6f6a7774c25b2b3579b5 namespace=k8s.io Nov 1 00:39:12.329852 env[1183]: time="2025-11-01T00:39:12.329829507Z" level=info msg="cleaning up dead shim" Nov 1 00:39:12.341265 env[1183]: time="2025-11-01T00:39:12.341197116Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:39:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2313 runtime=io.containerd.runc.v2\n" Nov 1 00:39:13.014885 kubelet[1883]: E1101 00:39:13.014831 1883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:39:13.021838 env[1183]: time="2025-11-01T00:39:13.019499828Z" level=info msg="CreateContainer within sandbox \"e5061f9cd5cd0d21bdf7a4e6280c48a32b4dfc1e45154c0db416e45698117c01\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 1 00:39:13.047947 env[1183]: time="2025-11-01T00:39:13.047856013Z" level=info msg="CreateContainer within sandbox \"e5061f9cd5cd0d21bdf7a4e6280c48a32b4dfc1e45154c0db416e45698117c01\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a66877fb8d20c92f0ef5fe43d869fce37c9853bd3b98a83e2b2352a02a117df6\"" Nov 1 00:39:13.049941 env[1183]: time="2025-11-01T00:39:13.049871560Z" level=info msg="StartContainer for \"a66877fb8d20c92f0ef5fe43d869fce37c9853bd3b98a83e2b2352a02a117df6\"" Nov 1 00:39:13.070155 systemd[1]: Started 
cri-containerd-a66877fb8d20c92f0ef5fe43d869fce37c9853bd3b98a83e2b2352a02a117df6.scope. Nov 1 00:39:13.120687 env[1183]: time="2025-11-01T00:39:13.120580631Z" level=info msg="StartContainer for \"a66877fb8d20c92f0ef5fe43d869fce37c9853bd3b98a83e2b2352a02a117df6\" returns successfully" Nov 1 00:39:13.143756 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 1 00:39:13.144915 systemd[1]: Stopped systemd-sysctl.service. Nov 1 00:39:13.145269 systemd[1]: Stopping systemd-sysctl.service... Nov 1 00:39:13.148809 systemd[1]: Starting systemd-sysctl.service... Nov 1 00:39:13.161087 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7764e61917254604b3f3ca8b682aecb44431882b2f6f6f6a7774c25b2b3579b5-rootfs.mount: Deactivated successfully. Nov 1 00:39:13.161218 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Nov 1 00:39:13.175822 systemd[1]: cri-containerd-a66877fb8d20c92f0ef5fe43d869fce37c9853bd3b98a83e2b2352a02a117df6.scope: Deactivated successfully. Nov 1 00:39:13.186940 systemd[1]: Finished systemd-sysctl.service. Nov 1 00:39:13.212299 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a66877fb8d20c92f0ef5fe43d869fce37c9853bd3b98a83e2b2352a02a117df6-rootfs.mount: Deactivated successfully. 
Nov 1 00:39:13.219855 env[1183]: time="2025-11-01T00:39:13.219780664Z" level=info msg="shim disconnected" id=a66877fb8d20c92f0ef5fe43d869fce37c9853bd3b98a83e2b2352a02a117df6 Nov 1 00:39:13.219855 env[1183]: time="2025-11-01T00:39:13.219848166Z" level=warning msg="cleaning up after shim disconnected" id=a66877fb8d20c92f0ef5fe43d869fce37c9853bd3b98a83e2b2352a02a117df6 namespace=k8s.io Nov 1 00:39:13.219855 env[1183]: time="2025-11-01T00:39:13.219864060Z" level=info msg="cleaning up dead shim" Nov 1 00:39:13.232533 env[1183]: time="2025-11-01T00:39:13.232463971Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:39:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2378 runtime=io.containerd.runc.v2\n" Nov 1 00:39:13.535783 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3613118024.mount: Deactivated successfully. Nov 1 00:39:14.020013 kubelet[1883]: E1101 00:39:14.019933 1883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:39:14.045216 env[1183]: time="2025-11-01T00:39:14.045146928Z" level=info msg="CreateContainer within sandbox \"e5061f9cd5cd0d21bdf7a4e6280c48a32b4dfc1e45154c0db416e45698117c01\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 1 00:39:14.067549 env[1183]: time="2025-11-01T00:39:14.067486294Z" level=info msg="CreateContainer within sandbox \"e5061f9cd5cd0d21bdf7a4e6280c48a32b4dfc1e45154c0db416e45698117c01\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0aeb264760953fdc8af3b9137349120050bec13a92bd2f3a1c52cf8d5fa2a6ee\"" Nov 1 00:39:14.073238 env[1183]: time="2025-11-01T00:39:14.073176882Z" level=info msg="StartContainer for \"0aeb264760953fdc8af3b9137349120050bec13a92bd2f3a1c52cf8d5fa2a6ee\"" Nov 1 00:39:14.093550 systemd[1]: Started cri-containerd-0aeb264760953fdc8af3b9137349120050bec13a92bd2f3a1c52cf8d5fa2a6ee.scope. 
Nov 1 00:39:14.141465 systemd[1]: cri-containerd-0aeb264760953fdc8af3b9137349120050bec13a92bd2f3a1c52cf8d5fa2a6ee.scope: Deactivated successfully. Nov 1 00:39:14.143538 env[1183]: time="2025-11-01T00:39:14.143489655Z" level=info msg="StartContainer for \"0aeb264760953fdc8af3b9137349120050bec13a92bd2f3a1c52cf8d5fa2a6ee\" returns successfully" Nov 1 00:39:14.201992 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0aeb264760953fdc8af3b9137349120050bec13a92bd2f3a1c52cf8d5fa2a6ee-rootfs.mount: Deactivated successfully. Nov 1 00:39:14.260595 env[1183]: time="2025-11-01T00:39:14.260518909Z" level=info msg="shim disconnected" id=0aeb264760953fdc8af3b9137349120050bec13a92bd2f3a1c52cf8d5fa2a6ee Nov 1 00:39:14.260595 env[1183]: time="2025-11-01T00:39:14.260581314Z" level=warning msg="cleaning up after shim disconnected" id=0aeb264760953fdc8af3b9137349120050bec13a92bd2f3a1c52cf8d5fa2a6ee namespace=k8s.io Nov 1 00:39:14.260595 env[1183]: time="2025-11-01T00:39:14.260594806Z" level=info msg="cleaning up dead shim" Nov 1 00:39:14.281149 env[1183]: time="2025-11-01T00:39:14.280484269Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:39:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2438 runtime=io.containerd.runc.v2\n" Nov 1 00:39:14.540477 env[1183]: time="2025-11-01T00:39:14.539977402Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:39:14.542025 env[1183]: time="2025-11-01T00:39:14.541978880Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:39:14.544530 env[1183]: time="2025-11-01T00:39:14.544488693Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:39:14.545374 env[1183]: time="2025-11-01T00:39:14.545324316Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Nov 1 00:39:14.552743 env[1183]: time="2025-11-01T00:39:14.552690246Z" level=info msg="CreateContainer within sandbox \"5bd4edccf31a80f9714b6236668ff25ac46fb29e5f042794a1dec405a5fcd1e8\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Nov 1 00:39:14.567233 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1039554667.mount: Deactivated successfully. Nov 1 00:39:14.577854 env[1183]: time="2025-11-01T00:39:14.577779618Z" level=info msg="CreateContainer within sandbox \"5bd4edccf31a80f9714b6236668ff25ac46fb29e5f042794a1dec405a5fcd1e8\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"a39220a88c9db7cd00638f5c3a045f2887a0fc8714e48eeab83ec03ecfa60caa\"" Nov 1 00:39:14.581209 env[1183]: time="2025-11-01T00:39:14.581147424Z" level=info msg="StartContainer for \"a39220a88c9db7cd00638f5c3a045f2887a0fc8714e48eeab83ec03ecfa60caa\"" Nov 1 00:39:14.607451 systemd[1]: Started cri-containerd-a39220a88c9db7cd00638f5c3a045f2887a0fc8714e48eeab83ec03ecfa60caa.scope. 
Nov 1 00:39:14.654409 env[1183]: time="2025-11-01T00:39:14.654348228Z" level=info msg="StartContainer for \"a39220a88c9db7cd00638f5c3a045f2887a0fc8714e48eeab83ec03ecfa60caa\" returns successfully" Nov 1 00:39:15.025076 kubelet[1883]: E1101 00:39:15.025016 1883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:39:15.029148 kubelet[1883]: E1101 00:39:15.028951 1883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:39:15.031773 env[1183]: time="2025-11-01T00:39:15.031722418Z" level=info msg="CreateContainer within sandbox \"e5061f9cd5cd0d21bdf7a4e6280c48a32b4dfc1e45154c0db416e45698117c01\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 1 00:39:15.052334 env[1183]: time="2025-11-01T00:39:15.052249174Z" level=info msg="CreateContainer within sandbox \"e5061f9cd5cd0d21bdf7a4e6280c48a32b4dfc1e45154c0db416e45698117c01\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f1389c19661e5cf351939d698af44d7a3feefd6a17f2bf99c87827981ba8277e\"" Nov 1 00:39:15.053229 env[1183]: time="2025-11-01T00:39:15.053191411Z" level=info msg="StartContainer for \"f1389c19661e5cf351939d698af44d7a3feefd6a17f2bf99c87827981ba8277e\"" Nov 1 00:39:15.080116 systemd[1]: Started cri-containerd-f1389c19661e5cf351939d698af44d7a3feefd6a17f2bf99c87827981ba8277e.scope. Nov 1 00:39:15.143704 env[1183]: time="2025-11-01T00:39:15.143637360Z" level=info msg="StartContainer for \"f1389c19661e5cf351939d698af44d7a3feefd6a17f2bf99c87827981ba8277e\" returns successfully" Nov 1 00:39:15.162297 systemd[1]: cri-containerd-f1389c19661e5cf351939d698af44d7a3feefd6a17f2bf99c87827981ba8277e.scope: Deactivated successfully. 
Nov 1 00:39:15.169607 kubelet[1883]: I1101 00:39:15.169512 1883 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-zttvf" podStartSLOduration=1.9635491379999999 podStartE2EDuration="15.169469709s" podCreationTimestamp="2025-11-01 00:39:00 +0000 UTC" firstStartedPulling="2025-11-01 00:39:01.341513601 +0000 UTC m=+4.761351186" lastFinishedPulling="2025-11-01 00:39:14.547434167 +0000 UTC m=+17.967271757" observedRunningTime="2025-11-01 00:39:15.065157488 +0000 UTC m=+18.484995086" watchObservedRunningTime="2025-11-01 00:39:15.169469709 +0000 UTC m=+18.589307321" Nov 1 00:39:15.190800 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f1389c19661e5cf351939d698af44d7a3feefd6a17f2bf99c87827981ba8277e-rootfs.mount: Deactivated successfully. Nov 1 00:39:15.218463 env[1183]: time="2025-11-01T00:39:15.218405103Z" level=info msg="shim disconnected" id=f1389c19661e5cf351939d698af44d7a3feefd6a17f2bf99c87827981ba8277e Nov 1 00:39:15.218782 env[1183]: time="2025-11-01T00:39:15.218757331Z" level=warning msg="cleaning up after shim disconnected" id=f1389c19661e5cf351939d698af44d7a3feefd6a17f2bf99c87827981ba8277e namespace=k8s.io Nov 1 00:39:15.218881 env[1183]: time="2025-11-01T00:39:15.218864631Z" level=info msg="cleaning up dead shim" Nov 1 00:39:15.236348 env[1183]: time="2025-11-01T00:39:15.236286057Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:39:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2533 runtime=io.containerd.runc.v2\n" Nov 1 00:39:16.033324 kubelet[1883]: E1101 00:39:16.033284 1883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:39:16.034763 kubelet[1883]: E1101 00:39:16.034739 1883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:39:16.037157 env[1183]: time="2025-11-01T00:39:16.037109152Z" level=info msg="CreateContainer within sandbox \"e5061f9cd5cd0d21bdf7a4e6280c48a32b4dfc1e45154c0db416e45698117c01\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 1 00:39:16.061891 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4081965974.mount: Deactivated successfully. Nov 1 00:39:16.073941 env[1183]: time="2025-11-01T00:39:16.073875730Z" level=info msg="CreateContainer within sandbox \"e5061f9cd5cd0d21bdf7a4e6280c48a32b4dfc1e45154c0db416e45698117c01\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5ba4ed1fed9df7befe8fc8e8d557f750a6ec7e1dfbcb06bc72c45505977a0f27\"" Nov 1 00:39:16.076235 env[1183]: time="2025-11-01T00:39:16.075725878Z" level=info msg="StartContainer for \"5ba4ed1fed9df7befe8fc8e8d557f750a6ec7e1dfbcb06bc72c45505977a0f27\"" Nov 1 00:39:16.096336 systemd[1]: Started cri-containerd-5ba4ed1fed9df7befe8fc8e8d557f750a6ec7e1dfbcb06bc72c45505977a0f27.scope. Nov 1 00:39:16.145874 env[1183]: time="2025-11-01T00:39:16.145813461Z" level=info msg="StartContainer for \"5ba4ed1fed9df7befe8fc8e8d557f750a6ec7e1dfbcb06bc72c45505977a0f27\" returns successfully" Nov 1 00:39:16.182694 systemd[1]: run-containerd-runc-k8s.io-5ba4ed1fed9df7befe8fc8e8d557f750a6ec7e1dfbcb06bc72c45505977a0f27-runc.0Fe8jg.mount: Deactivated successfully. Nov 1 00:39:16.341979 kubelet[1883]: I1101 00:39:16.341832 1883 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 1 00:39:16.393255 systemd[1]: Created slice kubepods-burstable-podb357552f_026b_4f10_a6de_054d3b7b2174.slice. Nov 1 00:39:16.406555 systemd[1]: Created slice kubepods-burstable-poda8abefc8_0d52_4edf_b880_213c7296f8ad.slice. 
Nov 1 00:39:16.567890 kubelet[1883]: I1101 00:39:16.567814 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b357552f-026b-4f10-a6de-054d3b7b2174-config-volume\") pod \"coredns-668d6bf9bc-bfgdt\" (UID: \"b357552f-026b-4f10-a6de-054d3b7b2174\") " pod="kube-system/coredns-668d6bf9bc-bfgdt"
Nov 1 00:39:16.567890 kubelet[1883]: I1101 00:39:16.567869 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a8abefc8-0d52-4edf-b880-213c7296f8ad-config-volume\") pod \"coredns-668d6bf9bc-5dhws\" (UID: \"a8abefc8-0d52-4edf-b880-213c7296f8ad\") " pod="kube-system/coredns-668d6bf9bc-5dhws"
Nov 1 00:39:16.567890 kubelet[1883]: I1101 00:39:16.567898 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h2f8\" (UniqueName: \"kubernetes.io/projected/b357552f-026b-4f10-a6de-054d3b7b2174-kube-api-access-8h2f8\") pod \"coredns-668d6bf9bc-bfgdt\" (UID: \"b357552f-026b-4f10-a6de-054d3b7b2174\") " pod="kube-system/coredns-668d6bf9bc-bfgdt"
Nov 1 00:39:16.568245 kubelet[1883]: I1101 00:39:16.567919 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97gnf\" (UniqueName: \"kubernetes.io/projected/a8abefc8-0d52-4edf-b880-213c7296f8ad-kube-api-access-97gnf\") pod \"coredns-668d6bf9bc-5dhws\" (UID: \"a8abefc8-0d52-4edf-b880-213c7296f8ad\") " pod="kube-system/coredns-668d6bf9bc-5dhws"
Nov 1 00:39:16.697292 kubelet[1883]: E1101 00:39:16.697220 1883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 1 00:39:16.700095 env[1183]: time="2025-11-01T00:39:16.698247917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bfgdt,Uid:b357552f-026b-4f10-a6de-054d3b7b2174,Namespace:kube-system,Attempt:0,}"
Nov 1 00:39:16.710786 kubelet[1883]: E1101 00:39:16.710725 1883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 1 00:39:16.712018 env[1183]: time="2025-11-01T00:39:16.711949152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5dhws,Uid:a8abefc8-0d52-4edf-b880-213c7296f8ad,Namespace:kube-system,Attempt:0,}"
Nov 1 00:39:17.038531 kubelet[1883]: E1101 00:39:17.038398 1883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 1 00:39:17.065259 kubelet[1883]: I1101 00:39:17.065165 1883 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mxsh7" podStartSLOduration=6.046627519 podStartE2EDuration="17.065144728s" podCreationTimestamp="2025-11-01 00:39:00 +0000 UTC" firstStartedPulling="2025-11-01 00:39:01.104497184 +0000 UTC m=+4.524334758" lastFinishedPulling="2025-11-01 00:39:12.123014358 +0000 UTC m=+15.542851967" observedRunningTime="2025-11-01 00:39:17.062375272 +0000 UTC m=+20.482212872" watchObservedRunningTime="2025-11-01 00:39:17.065144728 +0000 UTC m=+20.484982327"
Nov 1 00:39:18.040400 kubelet[1883]: E1101 00:39:18.040355 1883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 1 00:39:18.799177 systemd-networkd[997]: cilium_host: Link UP
Nov 1 00:39:18.800187 systemd-networkd[997]: cilium_net: Link UP
Nov 1 00:39:18.801923 systemd-networkd[997]: cilium_net: Gained carrier
Nov 1 00:39:18.804432 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Nov 1 00:39:18.804535 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Nov 1 00:39:18.804072 systemd-networkd[997]: cilium_host: Gained carrier
Nov 1 00:39:18.988496 systemd-networkd[997]: cilium_vxlan: Link UP
Nov 1 00:39:18.988723 systemd-networkd[997]: cilium_vxlan: Gained carrier
Nov 1 00:39:19.043476 kubelet[1883]: E1101 00:39:19.043417 1883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 1 00:39:19.358674 kernel: NET: Registered PF_ALG protocol family
Nov 1 00:39:19.591838 systemd-networkd[997]: cilium_host: Gained IPv6LL
Nov 1 00:39:19.718861 systemd-networkd[997]: cilium_net: Gained IPv6LL
Nov 1 00:39:20.302188 systemd-networkd[997]: lxc_health: Link UP
Nov 1 00:39:20.318459 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Nov 1 00:39:20.319483 systemd-networkd[997]: lxc_health: Gained carrier
Nov 1 00:39:20.678983 systemd-networkd[997]: cilium_vxlan: Gained IPv6LL
Nov 1 00:39:20.789512 systemd-networkd[997]: lxc30bd2343002d: Link UP
Nov 1 00:39:20.804841 kernel: eth0: renamed from tmp52be1
Nov 1 00:39:20.816906 systemd-networkd[997]: lxcbe6a1c3877e4: Link UP
Nov 1 00:39:20.822705 kernel: eth0: renamed from tmpfc0bc
Nov 1 00:39:20.828785 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc30bd2343002d: link becomes ready
Nov 1 00:39:20.828793 systemd-networkd[997]: lxc30bd2343002d: Gained carrier
Nov 1 00:39:20.837082 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcbe6a1c3877e4: link becomes ready
Nov 1 00:39:20.839449 systemd-networkd[997]: lxcbe6a1c3877e4: Gained carrier
Nov 1 00:39:20.953713 kubelet[1883]: E1101 00:39:20.953240 1883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 1 00:39:21.959011 systemd-networkd[997]: lxc_health: Gained IPv6LL
Nov 1 00:39:22.087131 systemd-networkd[997]: lxc30bd2343002d: Gained IPv6LL
Nov 1 00:39:22.726919 systemd-networkd[997]: lxcbe6a1c3877e4: Gained IPv6LL
Nov 1 00:39:25.671849 env[1183]: time="2025-11-01T00:39:25.666843621Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 1 00:39:25.671849 env[1183]: time="2025-11-01T00:39:25.666913707Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 1 00:39:25.671849 env[1183]: time="2025-11-01T00:39:25.666924892Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 1 00:39:25.671849 env[1183]: time="2025-11-01T00:39:25.667229186Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fc0bcc60b85a86fe63c09cf2fdd43de3865e9e799a7ba26566ad4a339e30a792 pid=3097 runtime=io.containerd.runc.v2
Nov 1 00:39:25.687078 env[1183]: time="2025-11-01T00:39:25.686888293Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 1 00:39:25.687287 env[1183]: time="2025-11-01T00:39:25.687085512Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 1 00:39:25.687287 env[1183]: time="2025-11-01T00:39:25.687125656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 1 00:39:25.687488 env[1183]: time="2025-11-01T00:39:25.687418872Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/52be13b149abd80ab8b02e73bd29861bd8c5baa5cb0cfc5d33cc56dd3b33fb00 pid=3107 runtime=io.containerd.runc.v2
Nov 1 00:39:25.707382 systemd[1]: Started cri-containerd-fc0bcc60b85a86fe63c09cf2fdd43de3865e9e799a7ba26566ad4a339e30a792.scope.
Nov 1 00:39:25.734759 systemd[1]: Started cri-containerd-52be13b149abd80ab8b02e73bd29861bd8c5baa5cb0cfc5d33cc56dd3b33fb00.scope.
Nov 1 00:39:25.829964 env[1183]: time="2025-11-01T00:39:25.826650023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bfgdt,Uid:b357552f-026b-4f10-a6de-054d3b7b2174,Namespace:kube-system,Attempt:0,} returns sandbox id \"52be13b149abd80ab8b02e73bd29861bd8c5baa5cb0cfc5d33cc56dd3b33fb00\""
Nov 1 00:39:25.832772 kubelet[1883]: E1101 00:39:25.832128 1883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 1 00:39:25.844639 env[1183]: time="2025-11-01T00:39:25.844529743Z" level=info msg="CreateContainer within sandbox \"52be13b149abd80ab8b02e73bd29861bd8c5baa5cb0cfc5d33cc56dd3b33fb00\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Nov 1 00:39:25.856180 env[1183]: time="2025-11-01T00:39:25.856114194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5dhws,Uid:a8abefc8-0d52-4edf-b880-213c7296f8ad,Namespace:kube-system,Attempt:0,} returns sandbox id \"fc0bcc60b85a86fe63c09cf2fdd43de3865e9e799a7ba26566ad4a339e30a792\""
Nov 1 00:39:25.857886 kubelet[1883]: E1101 00:39:25.857319 1883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 1 00:39:25.863507 env[1183]: time="2025-11-01T00:39:25.863451942Z" level=info msg="CreateContainer within sandbox \"fc0bcc60b85a86fe63c09cf2fdd43de3865e9e799a7ba26566ad4a339e30a792\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Nov 1 00:39:25.882824 env[1183]: time="2025-11-01T00:39:25.882758079Z" level=info msg="CreateContainer within sandbox \"52be13b149abd80ab8b02e73bd29861bd8c5baa5cb0cfc5d33cc56dd3b33fb00\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2973d84db6d6bc259374a3bbe3a5293ae6aee59e572a00d544c1aa054c5bd17a\""
Nov 1 00:39:25.884997 env[1183]: time="2025-11-01T00:39:25.884928291Z" level=info msg="StartContainer for \"2973d84db6d6bc259374a3bbe3a5293ae6aee59e572a00d544c1aa054c5bd17a\""
Nov 1 00:39:25.897639 env[1183]: time="2025-11-01T00:39:25.897557841Z" level=info msg="CreateContainer within sandbox \"fc0bcc60b85a86fe63c09cf2fdd43de3865e9e799a7ba26566ad4a339e30a792\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"27cb6086a943a3ee8d5490f9e5c36c4d79d85ba6c38a9e67b86bdc13fe63aa9c\""
Nov 1 00:39:25.900178 env[1183]: time="2025-11-01T00:39:25.900089605Z" level=info msg="StartContainer for \"27cb6086a943a3ee8d5490f9e5c36c4d79d85ba6c38a9e67b86bdc13fe63aa9c\""
Nov 1 00:39:25.923388 systemd[1]: Started cri-containerd-2973d84db6d6bc259374a3bbe3a5293ae6aee59e572a00d544c1aa054c5bd17a.scope.
Nov 1 00:39:25.951945 systemd[1]: Started cri-containerd-27cb6086a943a3ee8d5490f9e5c36c4d79d85ba6c38a9e67b86bdc13fe63aa9c.scope.
Nov 1 00:39:25.989896 env[1183]: time="2025-11-01T00:39:25.989831897Z" level=info msg="StartContainer for \"2973d84db6d6bc259374a3bbe3a5293ae6aee59e572a00d544c1aa054c5bd17a\" returns successfully"
Nov 1 00:39:26.002004 env[1183]: time="2025-11-01T00:39:26.001918766Z" level=info msg="StartContainer for \"27cb6086a943a3ee8d5490f9e5c36c4d79d85ba6c38a9e67b86bdc13fe63aa9c\" returns successfully"
Nov 1 00:39:26.062959 kubelet[1883]: E1101 00:39:26.062914 1883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 1 00:39:26.067766 kubelet[1883]: E1101 00:39:26.067705 1883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 1 00:39:26.101220 kubelet[1883]: I1101 00:39:26.101138 1883 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-5dhws" podStartSLOduration=26.101091775 podStartE2EDuration="26.101091775s" podCreationTimestamp="2025-11-01 00:39:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:39:26.095285873 +0000 UTC m=+29.515123473" watchObservedRunningTime="2025-11-01 00:39:26.101091775 +0000 UTC m=+29.520929384"
Nov 1 00:39:26.128881 kubelet[1883]: I1101 00:39:26.128765 1883 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-bfgdt" podStartSLOduration=26.12873841 podStartE2EDuration="26.12873841s" podCreationTimestamp="2025-11-01 00:39:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:39:26.126690789 +0000 UTC m=+29.546528396" watchObservedRunningTime="2025-11-01 00:39:26.12873841 +0000 UTC m=+29.548576014"
Nov 1 00:39:26.681268 systemd[1]: run-containerd-runc-k8s.io-52be13b149abd80ab8b02e73bd29861bd8c5baa5cb0cfc5d33cc56dd3b33fb00-runc.XNl9tA.mount: Deactivated successfully.
Nov 1 00:39:27.070764 kubelet[1883]: E1101 00:39:27.070038 1883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 1 00:39:27.071844 kubelet[1883]: E1101 00:39:27.070990 1883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 1 00:39:27.935651 kubelet[1883]: I1101 00:39:27.935588 1883 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 1 00:39:27.936232 kubelet[1883]: E1101 00:39:27.936208 1883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 1 00:39:28.072536 kubelet[1883]: E1101 00:39:28.072497 1883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 1 00:39:28.074309 kubelet[1883]: E1101 00:39:28.073656 1883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 1 00:39:28.074680 kubelet[1883]: E1101 00:39:28.073867 1883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 1 00:39:34.955493 systemd[1]: Started sshd@6-146.190.139.75:22-139.178.89.65:34970.service.
Nov 1 00:39:35.020831 sshd[3256]: Accepted publickey for core from 139.178.89.65 port 34970 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM
Nov 1 00:39:35.022976 sshd[3256]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:39:35.030661 systemd[1]: Started session-6.scope.
Nov 1 00:39:35.031606 systemd-logind[1177]: New session 6 of user core.
Nov 1 00:39:35.271571 sshd[3256]: pam_unix(sshd:session): session closed for user core
Nov 1 00:39:35.276207 systemd[1]: sshd@6-146.190.139.75:22-139.178.89.65:34970.service: Deactivated successfully.
Nov 1 00:39:35.277120 systemd[1]: session-6.scope: Deactivated successfully.
Nov 1 00:39:35.278379 systemd-logind[1177]: Session 6 logged out. Waiting for processes to exit.
Nov 1 00:39:35.279744 systemd-logind[1177]: Removed session 6.
Nov 1 00:39:40.279678 systemd[1]: Started sshd@7-146.190.139.75:22-139.178.89.65:51802.service.
Nov 1 00:39:40.332855 sshd[3269]: Accepted publickey for core from 139.178.89.65 port 51802 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM
Nov 1 00:39:40.335209 sshd[3269]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:39:40.341557 systemd-logind[1177]: New session 7 of user core.
Nov 1 00:39:40.342857 systemd[1]: Started session-7.scope.
Nov 1 00:39:40.516002 sshd[3269]: pam_unix(sshd:session): session closed for user core
Nov 1 00:39:40.519326 systemd[1]: sshd@7-146.190.139.75:22-139.178.89.65:51802.service: Deactivated successfully.
Nov 1 00:39:40.520104 systemd[1]: session-7.scope: Deactivated successfully.
Nov 1 00:39:40.521001 systemd-logind[1177]: Session 7 logged out. Waiting for processes to exit.
Nov 1 00:39:40.522224 systemd-logind[1177]: Removed session 7.
Nov 1 00:39:45.525364 systemd[1]: Started sshd@8-146.190.139.75:22-139.178.89.65:51808.service.
Nov 1 00:39:45.582975 sshd[3283]: Accepted publickey for core from 139.178.89.65 port 51808 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM
Nov 1 00:39:45.585262 sshd[3283]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:39:45.591675 systemd[1]: Started session-8.scope.
Nov 1 00:39:45.592131 systemd-logind[1177]: New session 8 of user core.
Nov 1 00:39:45.757946 sshd[3283]: pam_unix(sshd:session): session closed for user core
Nov 1 00:39:45.761812 systemd-logind[1177]: Session 8 logged out. Waiting for processes to exit.
Nov 1 00:39:45.762123 systemd[1]: sshd@8-146.190.139.75:22-139.178.89.65:51808.service: Deactivated successfully.
Nov 1 00:39:45.762968 systemd[1]: session-8.scope: Deactivated successfully.
Nov 1 00:39:45.764104 systemd-logind[1177]: Removed session 8.
Nov 1 00:39:50.763886 systemd[1]: Started sshd@9-146.190.139.75:22-139.178.89.65:60698.service.
Nov 1 00:39:50.812722 sshd[3297]: Accepted publickey for core from 139.178.89.65 port 60698 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM
Nov 1 00:39:50.815137 sshd[3297]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:39:50.821667 systemd-logind[1177]: New session 9 of user core.
Nov 1 00:39:50.821802 systemd[1]: Started session-9.scope.
Nov 1 00:39:50.987542 sshd[3297]: pam_unix(sshd:session): session closed for user core
Nov 1 00:39:50.995167 systemd[1]: Started sshd@10-146.190.139.75:22-139.178.89.65:60704.service.
Nov 1 00:39:50.996041 systemd[1]: sshd@9-146.190.139.75:22-139.178.89.65:60698.service: Deactivated successfully.
Nov 1 00:39:50.997056 systemd[1]: session-9.scope: Deactivated successfully.
Nov 1 00:39:50.998092 systemd-logind[1177]: Session 9 logged out. Waiting for processes to exit.
Nov 1 00:39:51.000031 systemd-logind[1177]: Removed session 9.
Nov 1 00:39:51.062033 sshd[3308]: Accepted publickey for core from 139.178.89.65 port 60704 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM
Nov 1 00:39:51.064443 sshd[3308]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:39:51.069556 systemd-logind[1177]: New session 10 of user core.
Nov 1 00:39:51.070079 systemd[1]: Started session-10.scope.
Nov 1 00:39:51.318063 systemd[1]: Started sshd@11-146.190.139.75:22-139.178.89.65:60720.service.
Nov 1 00:39:51.319574 sshd[3308]: pam_unix(sshd:session): session closed for user core
Nov 1 00:39:51.330155 systemd[1]: sshd@10-146.190.139.75:22-139.178.89.65:60704.service: Deactivated successfully.
Nov 1 00:39:51.331289 systemd[1]: session-10.scope: Deactivated successfully.
Nov 1 00:39:51.332248 systemd-logind[1177]: Session 10 logged out. Waiting for processes to exit.
Nov 1 00:39:51.336153 systemd-logind[1177]: Removed session 10.
Nov 1 00:39:51.395470 sshd[3318]: Accepted publickey for core from 139.178.89.65 port 60720 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM
Nov 1 00:39:51.397527 sshd[3318]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:39:51.403610 systemd[1]: Started session-11.scope.
Nov 1 00:39:51.403713 systemd-logind[1177]: New session 11 of user core.
Nov 1 00:39:51.602003 sshd[3318]: pam_unix(sshd:session): session closed for user core
Nov 1 00:39:51.605542 systemd[1]: sshd@11-146.190.139.75:22-139.178.89.65:60720.service: Deactivated successfully.
Nov 1 00:39:51.606437 systemd[1]: session-11.scope: Deactivated successfully.
Nov 1 00:39:51.607388 systemd-logind[1177]: Session 11 logged out. Waiting for processes to exit.
Nov 1 00:39:51.608659 systemd-logind[1177]: Removed session 11.
Nov 1 00:39:56.612243 systemd[1]: Started sshd@12-146.190.139.75:22-139.178.89.65:38986.service.
Nov 1 00:39:56.672726 sshd[3332]: Accepted publickey for core from 139.178.89.65 port 38986 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM
Nov 1 00:39:56.673464 sshd[3332]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:39:56.679685 systemd[1]: Started session-12.scope.
Nov 1 00:39:56.680714 systemd-logind[1177]: New session 12 of user core.
Nov 1 00:39:56.850051 sshd[3332]: pam_unix(sshd:session): session closed for user core
Nov 1 00:39:56.853799 systemd[1]: sshd@12-146.190.139.75:22-139.178.89.65:38986.service: Deactivated successfully.
Nov 1 00:39:56.854892 systemd[1]: session-12.scope: Deactivated successfully.
Nov 1 00:39:56.855381 systemd-logind[1177]: Session 12 logged out. Waiting for processes to exit.
Nov 1 00:39:56.856559 systemd-logind[1177]: Removed session 12.
Nov 1 00:40:01.864060 systemd[1]: Started sshd@13-146.190.139.75:22-139.178.89.65:38998.service.
Nov 1 00:40:01.941520 sshd[3348]: Accepted publickey for core from 139.178.89.65 port 38998 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM
Nov 1 00:40:01.948232 sshd[3348]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:40:01.963280 systemd-logind[1177]: New session 13 of user core.
Nov 1 00:40:01.964930 systemd[1]: Started session-13.scope.
Nov 1 00:40:02.208504 sshd[3348]: pam_unix(sshd:session): session closed for user core
Nov 1 00:40:02.215607 systemd[1]: sshd@13-146.190.139.75:22-139.178.89.65:38998.service: Deactivated successfully.
Nov 1 00:40:02.217215 systemd[1]: session-13.scope: Deactivated successfully.
Nov 1 00:40:02.218771 systemd-logind[1177]: Session 13 logged out. Waiting for processes to exit.
Nov 1 00:40:02.220274 systemd-logind[1177]: Removed session 13.
Nov 1 00:40:04.847231 kubelet[1883]: E1101 00:40:04.847174 1883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 1 00:40:07.217063 systemd[1]: Started sshd@14-146.190.139.75:22-139.178.89.65:43692.service.
Nov 1 00:40:07.278365 sshd[3360]: Accepted publickey for core from 139.178.89.65 port 43692 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM
Nov 1 00:40:07.281263 sshd[3360]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:40:07.289469 systemd[1]: Started session-14.scope.
Nov 1 00:40:07.290238 systemd-logind[1177]: New session 14 of user core.
Nov 1 00:40:07.464544 sshd[3360]: pam_unix(sshd:session): session closed for user core
Nov 1 00:40:07.469929 systemd[1]: sshd@14-146.190.139.75:22-139.178.89.65:43692.service: Deactivated successfully.
Nov 1 00:40:07.471342 systemd[1]: session-14.scope: Deactivated successfully.
Nov 1 00:40:07.473194 systemd-logind[1177]: Session 14 logged out. Waiting for processes to exit.
Nov 1 00:40:07.474647 systemd[1]: Started sshd@15-146.190.139.75:22-139.178.89.65:43700.service.
Nov 1 00:40:07.475892 systemd-logind[1177]: Removed session 14.
Nov 1 00:40:07.529657 sshd[3372]: Accepted publickey for core from 139.178.89.65 port 43700 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM
Nov 1 00:40:07.530418 sshd[3372]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:40:07.536570 systemd-logind[1177]: New session 15 of user core.
Nov 1 00:40:07.537258 systemd[1]: Started session-15.scope.
Nov 1 00:40:07.877581 sshd[3372]: pam_unix(sshd:session): session closed for user core
Nov 1 00:40:07.885890 systemd[1]: Started sshd@16-146.190.139.75:22-139.178.89.65:43710.service.
Nov 1 00:40:07.886842 systemd[1]: sshd@15-146.190.139.75:22-139.178.89.65:43700.service: Deactivated successfully.
Nov 1 00:40:07.888501 systemd[1]: session-15.scope: Deactivated successfully.
Nov 1 00:40:07.889493 systemd-logind[1177]: Session 15 logged out. Waiting for processes to exit.
Nov 1 00:40:07.891344 systemd-logind[1177]: Removed session 15.
Nov 1 00:40:07.964848 sshd[3381]: Accepted publickey for core from 139.178.89.65 port 43710 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM
Nov 1 00:40:07.967782 sshd[3381]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:40:07.974705 systemd-logind[1177]: New session 16 of user core.
Nov 1 00:40:07.975056 systemd[1]: Started session-16.scope.
Nov 1 00:40:08.822895 sshd[3381]: pam_unix(sshd:session): session closed for user core
Nov 1 00:40:08.838122 systemd[1]: Started sshd@17-146.190.139.75:22-139.178.89.65:43712.service.
Nov 1 00:40:08.839431 systemd[1]: sshd@16-146.190.139.75:22-139.178.89.65:43710.service: Deactivated successfully.
Nov 1 00:40:08.840595 systemd[1]: session-16.scope: Deactivated successfully.
Nov 1 00:40:08.847381 systemd-logind[1177]: Session 16 logged out. Waiting for processes to exit.
Nov 1 00:40:08.849519 systemd-logind[1177]: Removed session 16.
Nov 1 00:40:08.912474 sshd[3397]: Accepted publickey for core from 139.178.89.65 port 43712 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM
Nov 1 00:40:08.914807 sshd[3397]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:40:08.922766 systemd[1]: Started session-17.scope.
Nov 1 00:40:08.923356 systemd-logind[1177]: New session 17 of user core.
Nov 1 00:40:09.365664 sshd[3397]: pam_unix(sshd:session): session closed for user core
Nov 1 00:40:09.371145 systemd-logind[1177]: Session 17 logged out. Waiting for processes to exit.
Nov 1 00:40:09.371671 systemd[1]: sshd@17-146.190.139.75:22-139.178.89.65:43712.service: Deactivated successfully.
Nov 1 00:40:09.373518 systemd[1]: session-17.scope: Deactivated successfully.
Nov 1 00:40:09.378011 systemd-logind[1177]: Removed session 17.
Nov 1 00:40:09.382530 systemd[1]: Started sshd@18-146.190.139.75:22-139.178.89.65:43720.service.
Nov 1 00:40:09.490339 sshd[3410]: Accepted publickey for core from 139.178.89.65 port 43720 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM
Nov 1 00:40:09.492957 sshd[3410]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:40:09.501022 systemd[1]: Started session-18.scope.
Nov 1 00:40:09.501912 systemd-logind[1177]: New session 18 of user core.
Nov 1 00:40:09.662468 sshd[3410]: pam_unix(sshd:session): session closed for user core
Nov 1 00:40:09.666978 systemd[1]: sshd@18-146.190.139.75:22-139.178.89.65:43720.service: Deactivated successfully.
Nov 1 00:40:09.668207 systemd[1]: session-18.scope: Deactivated successfully.
Nov 1 00:40:09.670040 systemd-logind[1177]: Session 18 logged out. Waiting for processes to exit.
Nov 1 00:40:09.671593 systemd-logind[1177]: Removed session 18.
Nov 1 00:40:14.674431 systemd[1]: Started sshd@19-146.190.139.75:22-139.178.89.65:43724.service.
Nov 1 00:40:14.742347 sshd[3421]: Accepted publickey for core from 139.178.89.65 port 43724 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM
Nov 1 00:40:14.744938 sshd[3421]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:40:14.752992 systemd[1]: Started session-19.scope.
Nov 1 00:40:14.755170 systemd-logind[1177]: New session 19 of user core.
Nov 1 00:40:14.912890 sshd[3421]: pam_unix(sshd:session): session closed for user core
Nov 1 00:40:14.916598 systemd-logind[1177]: Session 19 logged out. Waiting for processes to exit.
Nov 1 00:40:14.918447 systemd[1]: sshd@19-146.190.139.75:22-139.178.89.65:43724.service: Deactivated successfully.
Nov 1 00:40:14.919503 systemd[1]: session-19.scope: Deactivated successfully.
Nov 1 00:40:14.921375 systemd-logind[1177]: Removed session 19.
Nov 1 00:40:19.922023 systemd[1]: Started sshd@20-146.190.139.75:22-139.178.89.65:49058.service.
Nov 1 00:40:19.976881 sshd[3434]: Accepted publickey for core from 139.178.89.65 port 49058 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM
Nov 1 00:40:19.979144 sshd[3434]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:40:19.987794 systemd-logind[1177]: New session 20 of user core.
Nov 1 00:40:19.988278 systemd[1]: Started session-20.scope.
Nov 1 00:40:20.152913 sshd[3434]: pam_unix(sshd:session): session closed for user core
Nov 1 00:40:20.156579 systemd-logind[1177]: Session 20 logged out. Waiting for processes to exit.
Nov 1 00:40:20.157019 systemd[1]: sshd@20-146.190.139.75:22-139.178.89.65:49058.service: Deactivated successfully.
Nov 1 00:40:20.158063 systemd[1]: session-20.scope: Deactivated successfully.
Nov 1 00:40:20.159333 systemd-logind[1177]: Removed session 20.
Nov 1 00:40:23.847228 kubelet[1883]: E1101 00:40:23.847167 1883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 1 00:40:23.848190 kubelet[1883]: E1101 00:40:23.848160 1883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 1 00:40:25.162187 systemd[1]: Started sshd@21-146.190.139.75:22-139.178.89.65:49062.service.
Nov 1 00:40:25.218681 sshd[3446]: Accepted publickey for core from 139.178.89.65 port 49062 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM
Nov 1 00:40:25.220147 sshd[3446]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:40:25.227083 systemd[1]: Started session-21.scope.
Nov 1 00:40:25.227445 systemd-logind[1177]: New session 21 of user core.
Nov 1 00:40:25.360860 sshd[3446]: pam_unix(sshd:session): session closed for user core
Nov 1 00:40:25.364489 systemd[1]: sshd@21-146.190.139.75:22-139.178.89.65:49062.service: Deactivated successfully.
Nov 1 00:40:25.365313 systemd[1]: session-21.scope: Deactivated successfully.
Nov 1 00:40:25.366312 systemd-logind[1177]: Session 21 logged out. Waiting for processes to exit.
Nov 1 00:40:25.367247 systemd-logind[1177]: Removed session 21.
Nov 1 00:40:28.846832 kubelet[1883]: E1101 00:40:28.846775 1883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 1 00:40:29.846552 kubelet[1883]: E1101 00:40:29.846493 1883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 1 00:40:30.369811 systemd[1]: Started sshd@22-146.190.139.75:22-139.178.89.65:58274.service.
Nov 1 00:40:30.420388 sshd[3458]: Accepted publickey for core from 139.178.89.65 port 58274 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM
Nov 1 00:40:30.422974 sshd[3458]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:40:30.429819 systemd-logind[1177]: New session 22 of user core.
Nov 1 00:40:30.430007 systemd[1]: Started session-22.scope.
Nov 1 00:40:30.582371 sshd[3458]: pam_unix(sshd:session): session closed for user core
Nov 1 00:40:30.585738 systemd-logind[1177]: Session 22 logged out. Waiting for processes to exit.
Nov 1 00:40:30.586021 systemd[1]: sshd@22-146.190.139.75:22-139.178.89.65:58274.service: Deactivated successfully.
Nov 1 00:40:30.586842 systemd[1]: session-22.scope: Deactivated successfully.
Nov 1 00:40:30.588018 systemd-logind[1177]: Removed session 22.
Nov 1 00:40:32.847097 kubelet[1883]: E1101 00:40:32.847041 1883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 1 00:40:35.590470 systemd[1]: Started sshd@23-146.190.139.75:22-139.178.89.65:58284.service.
Nov 1 00:40:35.642205 sshd[3472]: Accepted publickey for core from 139.178.89.65 port 58284 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM
Nov 1 00:40:35.644587 sshd[3472]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:40:35.650815 systemd[1]: Started session-23.scope.
Nov 1 00:40:35.651191 systemd-logind[1177]: New session 23 of user core.
Nov 1 00:40:35.792722 sshd[3472]: pam_unix(sshd:session): session closed for user core
Nov 1 00:40:35.797373 systemd[1]: sshd@23-146.190.139.75:22-139.178.89.65:58284.service: Deactivated successfully.
Nov 1 00:40:35.798230 systemd[1]: session-23.scope: Deactivated successfully.
Nov 1 00:40:35.798720 systemd-logind[1177]: Session 23 logged out. Waiting for processes to exit.
Nov 1 00:40:35.799647 systemd-logind[1177]: Removed session 23.
Nov 1 00:40:39.847221 kubelet[1883]: E1101 00:40:39.847174 1883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 1 00:40:40.801513 systemd[1]: Started sshd@24-146.190.139.75:22-139.178.89.65:47466.service.
Nov 1 00:40:40.857818 sshd[3485]: Accepted publickey for core from 139.178.89.65 port 47466 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM
Nov 1 00:40:40.860118 sshd[3485]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:40:40.868325 systemd[1]: Started session-24.scope.
Nov 1 00:40:40.868909 systemd-logind[1177]: New session 24 of user core.
Nov 1 00:40:41.036299 sshd[3485]: pam_unix(sshd:session): session closed for user core
Nov 1 00:40:41.041566 systemd[1]: sshd@24-146.190.139.75:22-139.178.89.65:47466.service: Deactivated successfully.
Nov 1 00:40:41.042298 systemd[1]: session-24.scope: Deactivated successfully.
Nov 1 00:40:41.044075 systemd-logind[1177]: Session 24 logged out. Waiting for processes to exit.
Nov 1 00:40:41.047717 systemd[1]: Started sshd@25-146.190.139.75:22-139.178.89.65:47480.service.
Nov 1 00:40:41.048730 systemd-logind[1177]: Removed session 24.
Nov 1 00:40:41.116540 sshd[3496]: Accepted publickey for core from 139.178.89.65 port 47480 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM
Nov 1 00:40:41.117453 sshd[3496]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:40:41.125177 systemd[1]: Started session-25.scope.
Nov 1 00:40:41.126761 systemd-logind[1177]: New session 25 of user core.
Nov 1 00:40:42.847299 kubelet[1883]: E1101 00:40:42.847254 1883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 1 00:40:43.126410 systemd[1]: run-containerd-runc-k8s.io-5ba4ed1fed9df7befe8fc8e8d557f750a6ec7e1dfbcb06bc72c45505977a0f27-runc.NHN1gZ.mount: Deactivated successfully.
Nov 1 00:40:43.153197 env[1183]: time="2025-11-01T00:40:43.153142975Z" level=info msg="StopContainer for \"a39220a88c9db7cd00638f5c3a045f2887a0fc8714e48eeab83ec03ecfa60caa\" with timeout 30 (s)"
Nov 1 00:40:43.154118 env[1183]: time="2025-11-01T00:40:43.154078120Z" level=info msg="Stop container \"a39220a88c9db7cd00638f5c3a045f2887a0fc8714e48eeab83ec03ecfa60caa\" with signal terminated"
Nov 1 00:40:43.154988 env[1183]: time="2025-11-01T00:40:43.154928254Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 1 00:40:43.165393 env[1183]: time="2025-11-01T00:40:43.165339479Z" level=info msg="StopContainer for \"5ba4ed1fed9df7befe8fc8e8d557f750a6ec7e1dfbcb06bc72c45505977a0f27\" with timeout 2 (s)"
Nov 1 00:40:43.165728 env[1183]: time="2025-11-01T00:40:43.165699204Z" level=info msg="Stop container \"5ba4ed1fed9df7befe8fc8e8d557f750a6ec7e1dfbcb06bc72c45505977a0f27\" with signal terminated"
Nov 1 00:40:43.169455 systemd[1]: cri-containerd-a39220a88c9db7cd00638f5c3a045f2887a0fc8714e48eeab83ec03ecfa60caa.scope: Deactivated successfully.
Nov 1 00:40:43.185955 systemd-networkd[997]: lxc_health: Link DOWN
Nov 1 00:40:43.185967 systemd-networkd[997]: lxc_health: Lost carrier
Nov 1 00:40:43.254916 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a39220a88c9db7cd00638f5c3a045f2887a0fc8714e48eeab83ec03ecfa60caa-rootfs.mount: Deactivated successfully.
Nov 1 00:40:43.265859 env[1183]: time="2025-11-01T00:40:43.265782558Z" level=info msg="shim disconnected" id=a39220a88c9db7cd00638f5c3a045f2887a0fc8714e48eeab83ec03ecfa60caa
Nov 1 00:40:43.266234 env[1183]: time="2025-11-01T00:40:43.266204044Z" level=warning msg="cleaning up after shim disconnected" id=a39220a88c9db7cd00638f5c3a045f2887a0fc8714e48eeab83ec03ecfa60caa namespace=k8s.io
Nov 1 00:40:43.266343 env[1183]: time="2025-11-01T00:40:43.266321389Z" level=info msg="cleaning up dead shim"
Nov 1 00:40:43.283205 systemd[1]: cri-containerd-5ba4ed1fed9df7befe8fc8e8d557f750a6ec7e1dfbcb06bc72c45505977a0f27.scope: Deactivated successfully.
Nov 1 00:40:43.283671 systemd[1]: cri-containerd-5ba4ed1fed9df7befe8fc8e8d557f750a6ec7e1dfbcb06bc72c45505977a0f27.scope: Consumed 8.506s CPU time.
Nov 1 00:40:43.286659 env[1183]: time="2025-11-01T00:40:43.286111712Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:40:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3550 runtime=io.containerd.runc.v2\n"
Nov 1 00:40:43.292082 env[1183]: time="2025-11-01T00:40:43.292003121Z" level=info msg="StopContainer for \"a39220a88c9db7cd00638f5c3a045f2887a0fc8714e48eeab83ec03ecfa60caa\" returns successfully"
Nov 1 00:40:43.296008 env[1183]: time="2025-11-01T00:40:43.295932871Z" level=info msg="StopPodSandbox for \"5bd4edccf31a80f9714b6236668ff25ac46fb29e5f042794a1dec405a5fcd1e8\""
Nov 1 00:40:43.300750 env[1183]: time="2025-11-01T00:40:43.296038288Z" level=info msg="Container to stop \"a39220a88c9db7cd00638f5c3a045f2887a0fc8714e48eeab83ec03ecfa60caa\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 1 00:40:43.299259 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5bd4edccf31a80f9714b6236668ff25ac46fb29e5f042794a1dec405a5fcd1e8-shm.mount: Deactivated successfully.
Nov 1 00:40:43.332321 systemd[1]: cri-containerd-5bd4edccf31a80f9714b6236668ff25ac46fb29e5f042794a1dec405a5fcd1e8.scope: Deactivated successfully.
Nov 1 00:40:43.359923 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5ba4ed1fed9df7befe8fc8e8d557f750a6ec7e1dfbcb06bc72c45505977a0f27-rootfs.mount: Deactivated successfully.
Nov 1 00:40:43.373667 env[1183]: time="2025-11-01T00:40:43.372670674Z" level=info msg="shim disconnected" id=5ba4ed1fed9df7befe8fc8e8d557f750a6ec7e1dfbcb06bc72c45505977a0f27
Nov 1 00:40:43.373667 env[1183]: time="2025-11-01T00:40:43.372743130Z" level=warning msg="cleaning up after shim disconnected" id=5ba4ed1fed9df7befe8fc8e8d557f750a6ec7e1dfbcb06bc72c45505977a0f27 namespace=k8s.io
Nov 1 00:40:43.373667 env[1183]: time="2025-11-01T00:40:43.372757065Z" level=info msg="cleaning up dead shim"
Nov 1 00:40:43.407452 env[1183]: time="2025-11-01T00:40:43.407383946Z" level=info msg="shim disconnected" id=5bd4edccf31a80f9714b6236668ff25ac46fb29e5f042794a1dec405a5fcd1e8
Nov 1 00:40:43.407764 env[1183]: time="2025-11-01T00:40:43.407475077Z" level=warning msg="cleaning up after shim disconnected" id=5bd4edccf31a80f9714b6236668ff25ac46fb29e5f042794a1dec405a5fcd1e8 namespace=k8s.io
Nov 1 00:40:43.407764 env[1183]: time="2025-11-01T00:40:43.407496678Z" level=info msg="cleaning up dead shim"
Nov 1 00:40:43.420669 env[1183]: time="2025-11-01T00:40:43.418312268Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:40:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3587 runtime=io.containerd.runc.v2\n"
Nov 1 00:40:43.424110 env[1183]: time="2025-11-01T00:40:43.424016662Z" level=info msg="StopContainer for \"5ba4ed1fed9df7befe8fc8e8d557f750a6ec7e1dfbcb06bc72c45505977a0f27\" returns successfully"
Nov 1 00:40:43.425172 env[1183]: time="2025-11-01T00:40:43.425121746Z" level=info msg="StopPodSandbox for \"e5061f9cd5cd0d21bdf7a4e6280c48a32b4dfc1e45154c0db416e45698117c01\""
Nov 1 00:40:43.425330 env[1183]: time="2025-11-01T00:40:43.425216276Z" level=info msg="Container to stop \"f1389c19661e5cf351939d698af44d7a3feefd6a17f2bf99c87827981ba8277e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 1 00:40:43.425330 env[1183]: time="2025-11-01T00:40:43.425240367Z" level=info msg="Container to stop \"a66877fb8d20c92f0ef5fe43d869fce37c9853bd3b98a83e2b2352a02a117df6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 1 00:40:43.425330 env[1183]: time="2025-11-01T00:40:43.425264930Z" level=info msg="Container to stop \"0aeb264760953fdc8af3b9137349120050bec13a92bd2f3a1c52cf8d5fa2a6ee\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 1 00:40:43.425330 env[1183]: time="2025-11-01T00:40:43.425282441Z" level=info msg="Container to stop \"5ba4ed1fed9df7befe8fc8e8d557f750a6ec7e1dfbcb06bc72c45505977a0f27\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 1 00:40:43.425330 env[1183]: time="2025-11-01T00:40:43.425300993Z" level=info msg="Container to stop \"7764e61917254604b3f3ca8b682aecb44431882b2f6f6f6a7774c25b2b3579b5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Nov 1 00:40:43.436352 env[1183]: time="2025-11-01T00:40:43.436286868Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:40:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3601 runtime=io.containerd.runc.v2\n"
Nov 1 00:40:43.436807 env[1183]: time="2025-11-01T00:40:43.436762653Z" level=info msg="TearDown network for sandbox \"5bd4edccf31a80f9714b6236668ff25ac46fb29e5f042794a1dec405a5fcd1e8\" successfully"
Nov 1 00:40:43.436807 env[1183]: time="2025-11-01T00:40:43.436804798Z" level=info msg="StopPodSandbox for \"5bd4edccf31a80f9714b6236668ff25ac46fb29e5f042794a1dec405a5fcd1e8\" returns successfully"
Nov 1 00:40:43.447266 systemd[1]: cri-containerd-e5061f9cd5cd0d21bdf7a4e6280c48a32b4dfc1e45154c0db416e45698117c01.scope: Deactivated successfully.
Nov 1 00:40:43.487448 env[1183]: time="2025-11-01T00:40:43.487370826Z" level=info msg="shim disconnected" id=e5061f9cd5cd0d21bdf7a4e6280c48a32b4dfc1e45154c0db416e45698117c01
Nov 1 00:40:43.487448 env[1183]: time="2025-11-01T00:40:43.487441017Z" level=warning msg="cleaning up after shim disconnected" id=e5061f9cd5cd0d21bdf7a4e6280c48a32b4dfc1e45154c0db416e45698117c01 namespace=k8s.io
Nov 1 00:40:43.487448 env[1183]: time="2025-11-01T00:40:43.487455677Z" level=info msg="cleaning up dead shim"
Nov 1 00:40:43.499511 env[1183]: time="2025-11-01T00:40:43.499451989Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:40:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3636 runtime=io.containerd.runc.v2\n"
Nov 1 00:40:43.500145 env[1183]: time="2025-11-01T00:40:43.500107254Z" level=info msg="TearDown network for sandbox \"e5061f9cd5cd0d21bdf7a4e6280c48a32b4dfc1e45154c0db416e45698117c01\" successfully"
Nov 1 00:40:43.500277 env[1183]: time="2025-11-01T00:40:43.500256178Z" level=info msg="StopPodSandbox for \"e5061f9cd5cd0d21bdf7a4e6280c48a32b4dfc1e45154c0db416e45698117c01\" returns successfully"
Nov 1 00:40:43.586831 kubelet[1883]: I1101 00:40:43.586741 1883 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5485e9b1-7b73-471f-a2b5-fa031b8ca2cc-cilium-config-path\") pod \"5485e9b1-7b73-471f-a2b5-fa031b8ca2cc\" (UID: \"5485e9b1-7b73-471f-a2b5-fa031b8ca2cc\") "
Nov 1 00:40:43.587249 kubelet[1883]: I1101 00:40:43.587212 1883 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5ec39fd0-dc62-4162-bce5-cc595ded4176-clustermesh-secrets\") pod \"5ec39fd0-dc62-4162-bce5-cc595ded4176\" (UID: \"5ec39fd0-dc62-4162-bce5-cc595ded4176\") "
Nov 1 00:40:43.587398 kubelet[1883]: I1101 00:40:43.587379 1883 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5ec39fd0-dc62-4162-bce5-cc595ded4176-cni-path\") pod \"5ec39fd0-dc62-4162-bce5-cc595ded4176\" (UID: \"5ec39fd0-dc62-4162-bce5-cc595ded4176\") "
Nov 1 00:40:43.587527 kubelet[1883]: I1101 00:40:43.587511 1883 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5ec39fd0-dc62-4162-bce5-cc595ded4176-cilium-run\") pod \"5ec39fd0-dc62-4162-bce5-cc595ded4176\" (UID: \"5ec39fd0-dc62-4162-bce5-cc595ded4176\") "
Nov 1 00:40:43.587659 kubelet[1883]: I1101 00:40:43.587635 1883 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5ec39fd0-dc62-4162-bce5-cc595ded4176-cilium-cgroup\") pod \"5ec39fd0-dc62-4162-bce5-cc595ded4176\" (UID: \"5ec39fd0-dc62-4162-bce5-cc595ded4176\") "
Nov 1 00:40:43.587798 kubelet[1883]: I1101 00:40:43.587777 1883 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnvkz\" (UniqueName: \"kubernetes.io/projected/5485e9b1-7b73-471f-a2b5-fa031b8ca2cc-kube-api-access-xnvkz\") pod \"5485e9b1-7b73-471f-a2b5-fa031b8ca2cc\" (UID: \"5485e9b1-7b73-471f-a2b5-fa031b8ca2cc\") "
Nov 1 00:40:43.588111 kubelet[1883]: I1101 00:40:43.588091 1883 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5ec39fd0-dc62-4162-bce5-cc595ded4176-hubble-tls\") pod \"5ec39fd0-dc62-4162-bce5-cc595ded4176\" (UID: \"5ec39fd0-dc62-4162-bce5-cc595ded4176\") "
Nov 1 00:40:43.588259 kubelet[1883]: I1101 00:40:43.588243 1883 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5ec39fd0-dc62-4162-bce5-cc595ded4176-cilium-config-path\") pod \"5ec39fd0-dc62-4162-bce5-cc595ded4176\" (UID: \"5ec39fd0-dc62-4162-bce5-cc595ded4176\") "
Nov 1 00:40:43.588367 kubelet[1883]: I1101 00:40:43.588353 1883 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5ec39fd0-dc62-4162-bce5-cc595ded4176-lib-modules\") pod \"5ec39fd0-dc62-4162-bce5-cc595ded4176\" (UID: \"5ec39fd0-dc62-4162-bce5-cc595ded4176\") "
Nov 1 00:40:43.588479 kubelet[1883]: I1101 00:40:43.588459 1883 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5ec39fd0-dc62-4162-bce5-cc595ded4176-etc-cni-netd\") pod \"5ec39fd0-dc62-4162-bce5-cc595ded4176\" (UID: \"5ec39fd0-dc62-4162-bce5-cc595ded4176\") "
Nov 1 00:40:43.588591 kubelet[1883]: I1101 00:40:43.588576 1883 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5ec39fd0-dc62-4162-bce5-cc595ded4176-host-proc-sys-kernel\") pod \"5ec39fd0-dc62-4162-bce5-cc595ded4176\" (UID: \"5ec39fd0-dc62-4162-bce5-cc595ded4176\") "
Nov 1 00:40:43.588771 kubelet[1883]: I1101 00:40:43.588755 1883 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tqhj\" (UniqueName: \"kubernetes.io/projected/5ec39fd0-dc62-4162-bce5-cc595ded4176-kube-api-access-9tqhj\") pod \"5ec39fd0-dc62-4162-bce5-cc595ded4176\" (UID: \"5ec39fd0-dc62-4162-bce5-cc595ded4176\") "
Nov 1 00:40:43.588876 kubelet[1883]: I1101 00:40:43.588862 1883 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5ec39fd0-dc62-4162-bce5-cc595ded4176-hostproc\") pod \"5ec39fd0-dc62-4162-bce5-cc595ded4176\" (UID: \"5ec39fd0-dc62-4162-bce5-cc595ded4176\") "
Nov 1 00:40:43.588994 kubelet[1883]: I1101 00:40:43.588975 1883 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5ec39fd0-dc62-4162-bce5-cc595ded4176-host-proc-sys-net\") pod \"5ec39fd0-dc62-4162-bce5-cc595ded4176\" (UID: \"5ec39fd0-dc62-4162-bce5-cc595ded4176\") "
Nov 1 00:40:43.589127 kubelet[1883]: I1101 00:40:43.589110 1883 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5ec39fd0-dc62-4162-bce5-cc595ded4176-bpf-maps\") pod \"5ec39fd0-dc62-4162-bce5-cc595ded4176\" (UID: \"5ec39fd0-dc62-4162-bce5-cc595ded4176\") "
Nov 1 00:40:43.590051 kubelet[1883]: I1101 00:40:43.589329 1883 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ec39fd0-dc62-4162-bce5-cc595ded4176-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5ec39fd0-dc62-4162-bce5-cc595ded4176" (UID: "5ec39fd0-dc62-4162-bce5-cc595ded4176"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:40:43.591542 kubelet[1883]: I1101 00:40:43.588741 1883 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5485e9b1-7b73-471f-a2b5-fa031b8ca2cc-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5485e9b1-7b73-471f-a2b5-fa031b8ca2cc" (UID: "5485e9b1-7b73-471f-a2b5-fa031b8ca2cc"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Nov 1 00:40:43.592022 kubelet[1883]: I1101 00:40:43.591993 1883 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ec39fd0-dc62-4162-bce5-cc595ded4176-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5ec39fd0-dc62-4162-bce5-cc595ded4176" (UID: "5ec39fd0-dc62-4162-bce5-cc595ded4176"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:40:43.592546 kubelet[1883]: I1101 00:40:43.592165 1883 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ec39fd0-dc62-4162-bce5-cc595ded4176-cni-path" (OuterVolumeSpecName: "cni-path") pod "5ec39fd0-dc62-4162-bce5-cc595ded4176" (UID: "5ec39fd0-dc62-4162-bce5-cc595ded4176"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:40:43.592707 kubelet[1883]: I1101 00:40:43.592187 1883 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ec39fd0-dc62-4162-bce5-cc595ded4176-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5ec39fd0-dc62-4162-bce5-cc595ded4176" (UID: "5ec39fd0-dc62-4162-bce5-cc595ded4176"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:40:43.592831 kubelet[1883]: I1101 00:40:43.592206 1883 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ec39fd0-dc62-4162-bce5-cc595ded4176-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5ec39fd0-dc62-4162-bce5-cc595ded4176" (UID: "5ec39fd0-dc62-4162-bce5-cc595ded4176"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:40:43.592972 kubelet[1883]: I1101 00:40:43.592953 1883 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ec39fd0-dc62-4162-bce5-cc595ded4176-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5ec39fd0-dc62-4162-bce5-cc595ded4176" (UID: "5ec39fd0-dc62-4162-bce5-cc595ded4176"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:40:43.593122 kubelet[1883]: I1101 00:40:43.593102 1883 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ec39fd0-dc62-4162-bce5-cc595ded4176-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5ec39fd0-dc62-4162-bce5-cc595ded4176" (UID: "5ec39fd0-dc62-4162-bce5-cc595ded4176"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:40:43.594204 kubelet[1883]: I1101 00:40:43.594180 1883 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ec39fd0-dc62-4162-bce5-cc595ded4176-hostproc" (OuterVolumeSpecName: "hostproc") pod "5ec39fd0-dc62-4162-bce5-cc595ded4176" (UID: "5ec39fd0-dc62-4162-bce5-cc595ded4176"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:40:43.594347 kubelet[1883]: I1101 00:40:43.594329 1883 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ec39fd0-dc62-4162-bce5-cc595ded4176-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5ec39fd0-dc62-4162-bce5-cc595ded4176" (UID: "5ec39fd0-dc62-4162-bce5-cc595ded4176"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:40:43.596947 kubelet[1883]: I1101 00:40:43.596915 1883 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ec39fd0-dc62-4162-bce5-cc595ded4176-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5ec39fd0-dc62-4162-bce5-cc595ded4176" (UID: "5ec39fd0-dc62-4162-bce5-cc595ded4176"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Nov 1 00:40:43.599986 kubelet[1883]: I1101 00:40:43.599906 1883 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ec39fd0-dc62-4162-bce5-cc595ded4176-kube-api-access-9tqhj" (OuterVolumeSpecName: "kube-api-access-9tqhj") pod "5ec39fd0-dc62-4162-bce5-cc595ded4176" (UID: "5ec39fd0-dc62-4162-bce5-cc595ded4176"). InnerVolumeSpecName "kube-api-access-9tqhj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Nov 1 00:40:43.600115 kubelet[1883]: I1101 00:40:43.600047 1883 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5485e9b1-7b73-471f-a2b5-fa031b8ca2cc-kube-api-access-xnvkz" (OuterVolumeSpecName: "kube-api-access-xnvkz") pod "5485e9b1-7b73-471f-a2b5-fa031b8ca2cc" (UID: "5485e9b1-7b73-471f-a2b5-fa031b8ca2cc"). InnerVolumeSpecName "kube-api-access-xnvkz". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Nov 1 00:40:43.601217 kubelet[1883]: I1101 00:40:43.601174 1883 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ec39fd0-dc62-4162-bce5-cc595ded4176-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5ec39fd0-dc62-4162-bce5-cc595ded4176" (UID: "5ec39fd0-dc62-4162-bce5-cc595ded4176"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Nov 1 00:40:43.603054 kubelet[1883]: I1101 00:40:43.603014 1883 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ec39fd0-dc62-4162-bce5-cc595ded4176-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5ec39fd0-dc62-4162-bce5-cc595ded4176" (UID: "5ec39fd0-dc62-4162-bce5-cc595ded4176"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Nov 1 00:40:43.691084 kubelet[1883]: I1101 00:40:43.690003 1883 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5ec39fd0-dc62-4162-bce5-cc595ded4176-xtables-lock\") pod \"5ec39fd0-dc62-4162-bce5-cc595ded4176\" (UID: \"5ec39fd0-dc62-4162-bce5-cc595ded4176\") "
Nov 1 00:40:43.691084 kubelet[1883]: I1101 00:40:43.690768 1883 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5ec39fd0-dc62-4162-bce5-cc595ded4176-cilium-config-path\") on node \"ci-3510.3.8-n-368ce9a156\" DevicePath \"\""
Nov 1 00:40:43.691084 kubelet[1883]: I1101 00:40:43.690798 1883 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5ec39fd0-dc62-4162-bce5-cc595ded4176-hubble-tls\") on node \"ci-3510.3.8-n-368ce9a156\" DevicePath \"\""
Nov 1 00:40:43.691084 kubelet[1883]: I1101 00:40:43.690812 1883 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5ec39fd0-dc62-4162-bce5-cc595ded4176-lib-modules\") on node \"ci-3510.3.8-n-368ce9a156\" DevicePath \"\""
Nov 1 00:40:43.691084 kubelet[1883]: I1101 00:40:43.690832 1883 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5ec39fd0-dc62-4162-bce5-cc595ded4176-host-proc-sys-kernel\") on node \"ci-3510.3.8-n-368ce9a156\" DevicePath \"\""
Nov 1 00:40:43.691084 kubelet[1883]: I1101 00:40:43.690848 1883 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9tqhj\" (UniqueName: \"kubernetes.io/projected/5ec39fd0-dc62-4162-bce5-cc595ded4176-kube-api-access-9tqhj\") on node \"ci-3510.3.8-n-368ce9a156\" DevicePath \"\""
Nov 1 00:40:43.691084 kubelet[1883]: I1101 00:40:43.690865 1883 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5ec39fd0-dc62-4162-bce5-cc595ded4176-etc-cni-netd\") on node \"ci-3510.3.8-n-368ce9a156\" DevicePath \"\""
Nov 1 00:40:43.691566 kubelet[1883]: I1101 00:40:43.690885 1883 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5ec39fd0-dc62-4162-bce5-cc595ded4176-host-proc-sys-net\") on node \"ci-3510.3.8-n-368ce9a156\" DevicePath \"\""
Nov 1 00:40:43.691566 kubelet[1883]: I1101 00:40:43.690905 1883 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5ec39fd0-dc62-4162-bce5-cc595ded4176-hostproc\") on node \"ci-3510.3.8-n-368ce9a156\" DevicePath \"\""
Nov 1 00:40:43.691566 kubelet[1883]: I1101 00:40:43.690918 1883 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5ec39fd0-dc62-4162-bce5-cc595ded4176-bpf-maps\") on node \"ci-3510.3.8-n-368ce9a156\" DevicePath \"\""
Nov 1 00:40:43.691566 kubelet[1883]: I1101 00:40:43.690931 1883 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5485e9b1-7b73-471f-a2b5-fa031b8ca2cc-cilium-config-path\") on node \"ci-3510.3.8-n-368ce9a156\" DevicePath \"\""
Nov 1 00:40:43.691566 kubelet[1883]: I1101 00:40:43.690952 1883 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5ec39fd0-dc62-4162-bce5-cc595ded4176-cilium-run\") on node \"ci-3510.3.8-n-368ce9a156\" DevicePath \"\""
Nov 1 00:40:43.691566 kubelet[1883]: I1101 00:40:43.690961 1883 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5ec39fd0-dc62-4162-bce5-cc595ded4176-cilium-cgroup\") on node \"ci-3510.3.8-n-368ce9a156\" DevicePath \"\""
Nov 1 00:40:43.691566 kubelet[1883]: I1101 00:40:43.690976 1883 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5ec39fd0-dc62-4162-bce5-cc595ded4176-clustermesh-secrets\") on node \"ci-3510.3.8-n-368ce9a156\" DevicePath \"\""
Nov 1 00:40:43.691566 kubelet[1883]: I1101 00:40:43.690985 1883 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5ec39fd0-dc62-4162-bce5-cc595ded4176-cni-path\") on node \"ci-3510.3.8-n-368ce9a156\" DevicePath \"\""
Nov 1 00:40:43.692008 kubelet[1883]: I1101 00:40:43.690994 1883 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xnvkz\" (UniqueName: \"kubernetes.io/projected/5485e9b1-7b73-471f-a2b5-fa031b8ca2cc-kube-api-access-xnvkz\") on node \"ci-3510.3.8-n-368ce9a156\" DevicePath \"\""
Nov 1 00:40:43.692008 kubelet[1883]: I1101 00:40:43.690186 1883 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ec39fd0-dc62-4162-bce5-cc595ded4176-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5ec39fd0-dc62-4162-bce5-cc595ded4176" (UID: "5ec39fd0-dc62-4162-bce5-cc595ded4176"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Nov 1 00:40:43.792282 kubelet[1883]: I1101 00:40:43.792216 1883 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5ec39fd0-dc62-4162-bce5-cc595ded4176-xtables-lock\") on node \"ci-3510.3.8-n-368ce9a156\" DevicePath \"\""
Nov 1 00:40:44.114263 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5bd4edccf31a80f9714b6236668ff25ac46fb29e5f042794a1dec405a5fcd1e8-rootfs.mount: Deactivated successfully.
Nov 1 00:40:44.114764 systemd[1]: var-lib-kubelet-pods-5485e9b1\x2d7b73\x2d471f\x2da2b5\x2dfa031b8ca2cc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxnvkz.mount: Deactivated successfully.
Nov 1 00:40:44.114997 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e5061f9cd5cd0d21bdf7a4e6280c48a32b4dfc1e45154c0db416e45698117c01-rootfs.mount: Deactivated successfully.
Nov 1 00:40:44.115156 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e5061f9cd5cd0d21bdf7a4e6280c48a32b4dfc1e45154c0db416e45698117c01-shm.mount: Deactivated successfully.
Nov 1 00:40:44.115298 systemd[1]: var-lib-kubelet-pods-5ec39fd0\x2ddc62\x2d4162\x2dbce5\x2dcc595ded4176-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9tqhj.mount: Deactivated successfully.
Nov 1 00:40:44.115449 systemd[1]: var-lib-kubelet-pods-5ec39fd0\x2ddc62\x2d4162\x2dbce5\x2dcc595ded4176-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Nov 1 00:40:44.115673 systemd[1]: var-lib-kubelet-pods-5ec39fd0\x2ddc62\x2d4162\x2dbce5\x2dcc595ded4176-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Nov 1 00:40:44.275780 systemd[1]: Removed slice kubepods-burstable-pod5ec39fd0_dc62_4162_bce5_cc595ded4176.slice.
Nov 1 00:40:44.275961 systemd[1]: kubepods-burstable-pod5ec39fd0_dc62_4162_bce5_cc595ded4176.slice: Consumed 8.649s CPU time.
Nov 1 00:40:44.291859 kubelet[1883]: I1101 00:40:44.291726 1883 scope.go:117] "RemoveContainer" containerID="5ba4ed1fed9df7befe8fc8e8d557f750a6ec7e1dfbcb06bc72c45505977a0f27"
Nov 1 00:40:44.299901 systemd[1]: Removed slice kubepods-besteffort-pod5485e9b1_7b73_471f_a2b5_fa031b8ca2cc.slice.
Nov 1 00:40:44.300466 env[1183]: time="2025-11-01T00:40:44.300030352Z" level=info msg="RemoveContainer for \"5ba4ed1fed9df7befe8fc8e8d557f750a6ec7e1dfbcb06bc72c45505977a0f27\""
Nov 1 00:40:44.314880 env[1183]: time="2025-11-01T00:40:44.314782465Z" level=info msg="RemoveContainer for \"5ba4ed1fed9df7befe8fc8e8d557f750a6ec7e1dfbcb06bc72c45505977a0f27\" returns successfully"
Nov 1 00:40:44.315899 kubelet[1883]: I1101 00:40:44.315833 1883 scope.go:117] "RemoveContainer" containerID="f1389c19661e5cf351939d698af44d7a3feefd6a17f2bf99c87827981ba8277e"
Nov 1 00:40:44.320137 env[1183]: time="2025-11-01T00:40:44.320065778Z" level=info msg="RemoveContainer for \"f1389c19661e5cf351939d698af44d7a3feefd6a17f2bf99c87827981ba8277e\""
Nov 1 00:40:44.325461 env[1183]: time="2025-11-01T00:40:44.325359904Z" level=info msg="RemoveContainer for \"f1389c19661e5cf351939d698af44d7a3feefd6a17f2bf99c87827981ba8277e\" returns successfully"
Nov 1 00:40:44.327378 kubelet[1883]: I1101 00:40:44.327306 1883 scope.go:117] "RemoveContainer" containerID="0aeb264760953fdc8af3b9137349120050bec13a92bd2f3a1c52cf8d5fa2a6ee"
Nov 1 00:40:44.329797 env[1183]: time="2025-11-01T00:40:44.329710480Z" level=info msg="RemoveContainer for \"0aeb264760953fdc8af3b9137349120050bec13a92bd2f3a1c52cf8d5fa2a6ee\""
Nov 1 00:40:44.340105 env[1183]: time="2025-11-01T00:40:44.340043619Z" level=info msg="RemoveContainer for \"0aeb264760953fdc8af3b9137349120050bec13a92bd2f3a1c52cf8d5fa2a6ee\" returns successfully"
Nov 1 00:40:44.340850 kubelet[1883]: I1101 00:40:44.340804 1883 scope.go:117] "RemoveContainer" containerID="a66877fb8d20c92f0ef5fe43d869fce37c9853bd3b98a83e2b2352a02a117df6"
Nov 1 00:40:44.342609 env[1183]: time="2025-11-01T00:40:44.342540330Z" level=info msg="RemoveContainer for \"a66877fb8d20c92f0ef5fe43d869fce37c9853bd3b98a83e2b2352a02a117df6\""
Nov 1 00:40:44.345909 env[1183]: time="2025-11-01T00:40:44.345817997Z" level=info msg="RemoveContainer for \"a66877fb8d20c92f0ef5fe43d869fce37c9853bd3b98a83e2b2352a02a117df6\" returns successfully"
Nov 1 00:40:44.346106 kubelet[1883]: I1101 00:40:44.346071 1883 scope.go:117] "RemoveContainer" containerID="7764e61917254604b3f3ca8b682aecb44431882b2f6f6f6a7774c25b2b3579b5"
Nov 1 00:40:44.347477 env[1183]: time="2025-11-01T00:40:44.347436905Z" level=info msg="RemoveContainer for \"7764e61917254604b3f3ca8b682aecb44431882b2f6f6f6a7774c25b2b3579b5\""
Nov 1 00:40:44.350603 env[1183]: time="2025-11-01T00:40:44.350531240Z" level=info msg="RemoveContainer for \"7764e61917254604b3f3ca8b682aecb44431882b2f6f6f6a7774c25b2b3579b5\" returns successfully"
Nov 1 00:40:44.351082 kubelet[1883]: I1101 00:40:44.351030 1883 scope.go:117] "RemoveContainer" containerID="5ba4ed1fed9df7befe8fc8e8d557f750a6ec7e1dfbcb06bc72c45505977a0f27"
Nov 1 00:40:44.351800 env[1183]: time="2025-11-01T00:40:44.351531867Z" level=error msg="ContainerStatus for \"5ba4ed1fed9df7befe8fc8e8d557f750a6ec7e1dfbcb06bc72c45505977a0f27\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5ba4ed1fed9df7befe8fc8e8d557f750a6ec7e1dfbcb06bc72c45505977a0f27\": not found"
Nov 1 00:40:44.352014 kubelet[1883]: E1101 00:40:44.351903 1883 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5ba4ed1fed9df7befe8fc8e8d557f750a6ec7e1dfbcb06bc72c45505977a0f27\": not found" containerID="5ba4ed1fed9df7befe8fc8e8d557f750a6ec7e1dfbcb06bc72c45505977a0f27"
Nov 1 00:40:44.354103 kubelet[1883]: I1101 00:40:44.353924 1883 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5ba4ed1fed9df7befe8fc8e8d557f750a6ec7e1dfbcb06bc72c45505977a0f27"} err="failed to get container status \"5ba4ed1fed9df7befe8fc8e8d557f750a6ec7e1dfbcb06bc72c45505977a0f27\": rpc error: code = NotFound desc = an error occurred when try to find container \"5ba4ed1fed9df7befe8fc8e8d557f750a6ec7e1dfbcb06bc72c45505977a0f27\": not found"
Nov 1 00:40:44.354103 kubelet[1883]: I1101 00:40:44.354109 1883 scope.go:117] "RemoveContainer" containerID="f1389c19661e5cf351939d698af44d7a3feefd6a17f2bf99c87827981ba8277e"
Nov 1 00:40:44.354513 env[1183]: time="2025-11-01T00:40:44.354435301Z" level=error msg="ContainerStatus for \"f1389c19661e5cf351939d698af44d7a3feefd6a17f2bf99c87827981ba8277e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f1389c19661e5cf351939d698af44d7a3feefd6a17f2bf99c87827981ba8277e\": not found"
Nov 1 00:40:44.354668 kubelet[1883]: E1101 00:40:44.354643 1883 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f1389c19661e5cf351939d698af44d7a3feefd6a17f2bf99c87827981ba8277e\": not found" containerID="f1389c19661e5cf351939d698af44d7a3feefd6a17f2bf99c87827981ba8277e"
Nov 1 00:40:44.354732 kubelet[1883]: I1101 00:40:44.354674 1883 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f1389c19661e5cf351939d698af44d7a3feefd6a17f2bf99c87827981ba8277e"} err="failed to get container status \"f1389c19661e5cf351939d698af44d7a3feefd6a17f2bf99c87827981ba8277e\": rpc error: code = NotFound desc = an error occurred when try to find container \"f1389c19661e5cf351939d698af44d7a3feefd6a17f2bf99c87827981ba8277e\": not found"
Nov 1 00:40:44.354732 kubelet[1883]: I1101 00:40:44.354698 1883 scope.go:117] "RemoveContainer" containerID="0aeb264760953fdc8af3b9137349120050bec13a92bd2f3a1c52cf8d5fa2a6ee"
Nov 1 00:40:44.355258 env[1183]: time="2025-11-01T00:40:44.354977511Z" level=error msg="ContainerStatus for \"0aeb264760953fdc8af3b9137349120050bec13a92bd2f3a1c52cf8d5fa2a6ee\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0aeb264760953fdc8af3b9137349120050bec13a92bd2f3a1c52cf8d5fa2a6ee\": not found"
Nov 1 00:40:44.355421 kubelet[1883]: E1101 00:40:44.355394 1883 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0aeb264760953fdc8af3b9137349120050bec13a92bd2f3a1c52cf8d5fa2a6ee\": not found" containerID="0aeb264760953fdc8af3b9137349120050bec13a92bd2f3a1c52cf8d5fa2a6ee" Nov 1 00:40:44.355590 kubelet[1883]: I1101 00:40:44.355543 1883 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0aeb264760953fdc8af3b9137349120050bec13a92bd2f3a1c52cf8d5fa2a6ee"} err="failed to get container status \"0aeb264760953fdc8af3b9137349120050bec13a92bd2f3a1c52cf8d5fa2a6ee\": rpc error: code = NotFound desc = an error occurred when try to find container \"0aeb264760953fdc8af3b9137349120050bec13a92bd2f3a1c52cf8d5fa2a6ee\": not found" Nov 1 00:40:44.355710 kubelet[1883]: I1101 00:40:44.355693 1883 scope.go:117] "RemoveContainer" containerID="a66877fb8d20c92f0ef5fe43d869fce37c9853bd3b98a83e2b2352a02a117df6" Nov 1 00:40:44.356138 env[1183]: time="2025-11-01T00:40:44.356041967Z" level=error msg="ContainerStatus for \"a66877fb8d20c92f0ef5fe43d869fce37c9853bd3b98a83e2b2352a02a117df6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a66877fb8d20c92f0ef5fe43d869fce37c9853bd3b98a83e2b2352a02a117df6\": not found" Nov 1 00:40:44.356315 kubelet[1883]: E1101 00:40:44.356256 1883 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a66877fb8d20c92f0ef5fe43d869fce37c9853bd3b98a83e2b2352a02a117df6\": not found" containerID="a66877fb8d20c92f0ef5fe43d869fce37c9853bd3b98a83e2b2352a02a117df6" Nov 1 00:40:44.356315 kubelet[1883]: I1101 00:40:44.356300 1883 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a66877fb8d20c92f0ef5fe43d869fce37c9853bd3b98a83e2b2352a02a117df6"} err="failed 
to get container status \"a66877fb8d20c92f0ef5fe43d869fce37c9853bd3b98a83e2b2352a02a117df6\": rpc error: code = NotFound desc = an error occurred when try to find container \"a66877fb8d20c92f0ef5fe43d869fce37c9853bd3b98a83e2b2352a02a117df6\": not found" Nov 1 00:40:44.356425 kubelet[1883]: I1101 00:40:44.356319 1883 scope.go:117] "RemoveContainer" containerID="7764e61917254604b3f3ca8b682aecb44431882b2f6f6f6a7774c25b2b3579b5" Nov 1 00:40:44.356579 env[1183]: time="2025-11-01T00:40:44.356507581Z" level=error msg="ContainerStatus for \"7764e61917254604b3f3ca8b682aecb44431882b2f6f6f6a7774c25b2b3579b5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7764e61917254604b3f3ca8b682aecb44431882b2f6f6f6a7774c25b2b3579b5\": not found" Nov 1 00:40:44.356706 kubelet[1883]: E1101 00:40:44.356672 1883 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7764e61917254604b3f3ca8b682aecb44431882b2f6f6f6a7774c25b2b3579b5\": not found" containerID="7764e61917254604b3f3ca8b682aecb44431882b2f6f6f6a7774c25b2b3579b5" Nov 1 00:40:44.356766 kubelet[1883]: I1101 00:40:44.356707 1883 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7764e61917254604b3f3ca8b682aecb44431882b2f6f6f6a7774c25b2b3579b5"} err="failed to get container status \"7764e61917254604b3f3ca8b682aecb44431882b2f6f6f6a7774c25b2b3579b5\": rpc error: code = NotFound desc = an error occurred when try to find container \"7764e61917254604b3f3ca8b682aecb44431882b2f6f6f6a7774c25b2b3579b5\": not found" Nov 1 00:40:44.356766 kubelet[1883]: I1101 00:40:44.356723 1883 scope.go:117] "RemoveContainer" containerID="a39220a88c9db7cd00638f5c3a045f2887a0fc8714e48eeab83ec03ecfa60caa" Nov 1 00:40:44.358222 env[1183]: time="2025-11-01T00:40:44.358189927Z" level=info msg="RemoveContainer for \"a39220a88c9db7cd00638f5c3a045f2887a0fc8714e48eeab83ec03ecfa60caa\"" Nov 
1 00:40:44.361312 env[1183]: time="2025-11-01T00:40:44.361262719Z" level=info msg="RemoveContainer for \"a39220a88c9db7cd00638f5c3a045f2887a0fc8714e48eeab83ec03ecfa60caa\" returns successfully" Nov 1 00:40:44.361535 kubelet[1883]: I1101 00:40:44.361498 1883 scope.go:117] "RemoveContainer" containerID="a39220a88c9db7cd00638f5c3a045f2887a0fc8714e48eeab83ec03ecfa60caa" Nov 1 00:40:44.361893 env[1183]: time="2025-11-01T00:40:44.361817205Z" level=error msg="ContainerStatus for \"a39220a88c9db7cd00638f5c3a045f2887a0fc8714e48eeab83ec03ecfa60caa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a39220a88c9db7cd00638f5c3a045f2887a0fc8714e48eeab83ec03ecfa60caa\": not found" Nov 1 00:40:44.362061 kubelet[1883]: E1101 00:40:44.362026 1883 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a39220a88c9db7cd00638f5c3a045f2887a0fc8714e48eeab83ec03ecfa60caa\": not found" containerID="a39220a88c9db7cd00638f5c3a045f2887a0fc8714e48eeab83ec03ecfa60caa" Nov 1 00:40:44.362115 kubelet[1883]: I1101 00:40:44.362057 1883 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a39220a88c9db7cd00638f5c3a045f2887a0fc8714e48eeab83ec03ecfa60caa"} err="failed to get container status \"a39220a88c9db7cd00638f5c3a045f2887a0fc8714e48eeab83ec03ecfa60caa\": rpc error: code = NotFound desc = an error occurred when try to find container \"a39220a88c9db7cd00638f5c3a045f2887a0fc8714e48eeab83ec03ecfa60caa\": not found" Nov 1 00:40:44.848740 kubelet[1883]: I1101 00:40:44.848690 1883 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5485e9b1-7b73-471f-a2b5-fa031b8ca2cc" path="/var/lib/kubelet/pods/5485e9b1-7b73-471f-a2b5-fa031b8ca2cc/volumes" Nov 1 00:40:44.849213 kubelet[1883]: I1101 00:40:44.849185 1883 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5ec39fd0-dc62-4162-bce5-cc595ded4176" path="/var/lib/kubelet/pods/5ec39fd0-dc62-4162-bce5-cc595ded4176/volumes" Nov 1 00:40:45.042125 sshd[3496]: pam_unix(sshd:session): session closed for user core Nov 1 00:40:45.048918 systemd[1]: Started sshd@26-146.190.139.75:22-139.178.89.65:47494.service. Nov 1 00:40:45.053801 systemd-logind[1177]: Session 25 logged out. Waiting for processes to exit. Nov 1 00:40:45.055209 systemd[1]: sshd@25-146.190.139.75:22-139.178.89.65:47480.service: Deactivated successfully. Nov 1 00:40:45.056216 systemd[1]: session-25.scope: Deactivated successfully. Nov 1 00:40:45.056377 systemd[1]: session-25.scope: Consumed 1.233s CPU time. Nov 1 00:40:45.059283 systemd-logind[1177]: Removed session 25. Nov 1 00:40:45.111101 sshd[3654]: Accepted publickey for core from 139.178.89.65 port 47494 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM Nov 1 00:40:45.112563 sshd[3654]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:40:45.119797 systemd[1]: Started session-26.scope. Nov 1 00:40:45.121239 systemd-logind[1177]: New session 26 of user core. Nov 1 00:40:45.845428 sshd[3654]: pam_unix(sshd:session): session closed for user core Nov 1 00:40:45.852793 systemd[1]: sshd@26-146.190.139.75:22-139.178.89.65:47494.service: Deactivated successfully. Nov 1 00:40:45.853637 systemd[1]: session-26.scope: Deactivated successfully. Nov 1 00:40:45.854709 systemd-logind[1177]: Session 26 logged out. Waiting for processes to exit. Nov 1 00:40:45.856483 systemd[1]: Started sshd@27-146.190.139.75:22-139.178.89.65:47498.service. Nov 1 00:40:45.859055 systemd-logind[1177]: Removed session 26. 
Nov 1 00:40:45.900808 kubelet[1883]: I1101 00:40:45.900744 1883 memory_manager.go:355] "RemoveStaleState removing state" podUID="5485e9b1-7b73-471f-a2b5-fa031b8ca2cc" containerName="cilium-operator" Nov 1 00:40:45.901347 kubelet[1883]: I1101 00:40:45.901328 1883 memory_manager.go:355] "RemoveStaleState removing state" podUID="5ec39fd0-dc62-4162-bce5-cc595ded4176" containerName="cilium-agent" Nov 1 00:40:45.912954 sshd[3665]: Accepted publickey for core from 139.178.89.65 port 47498 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM Nov 1 00:40:45.916210 sshd[3665]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:40:45.927813 systemd[1]: Started session-27.scope. Nov 1 00:40:45.929715 systemd-logind[1177]: New session 27 of user core. Nov 1 00:40:45.949716 systemd[1]: Created slice kubepods-burstable-podc1f42915_5501_4f2f_b1b8_6ef21025469b.slice. Nov 1 00:40:46.019609 kubelet[1883]: I1101 00:40:46.019497 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c1f42915-5501-4f2f-b1b8-6ef21025469b-hostproc\") pod \"cilium-4bkwv\" (UID: \"c1f42915-5501-4f2f-b1b8-6ef21025469b\") " pod="kube-system/cilium-4bkwv" Nov 1 00:40:46.020059 kubelet[1883]: I1101 00:40:46.020012 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c1f42915-5501-4f2f-b1b8-6ef21025469b-etc-cni-netd\") pod \"cilium-4bkwv\" (UID: \"c1f42915-5501-4f2f-b1b8-6ef21025469b\") " pod="kube-system/cilium-4bkwv" Nov 1 00:40:46.020272 kubelet[1883]: I1101 00:40:46.020251 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c1f42915-5501-4f2f-b1b8-6ef21025469b-xtables-lock\") pod \"cilium-4bkwv\" (UID: \"c1f42915-5501-4f2f-b1b8-6ef21025469b\") " 
pod="kube-system/cilium-4bkwv" Nov 1 00:40:46.020447 kubelet[1883]: I1101 00:40:46.020426 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c1f42915-5501-4f2f-b1b8-6ef21025469b-host-proc-sys-kernel\") pod \"cilium-4bkwv\" (UID: \"c1f42915-5501-4f2f-b1b8-6ef21025469b\") " pod="kube-system/cilium-4bkwv" Nov 1 00:40:46.020697 kubelet[1883]: I1101 00:40:46.020613 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c1f42915-5501-4f2f-b1b8-6ef21025469b-host-proc-sys-net\") pod \"cilium-4bkwv\" (UID: \"c1f42915-5501-4f2f-b1b8-6ef21025469b\") " pod="kube-system/cilium-4bkwv" Nov 1 00:40:46.020850 kubelet[1883]: I1101 00:40:46.020826 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c1f42915-5501-4f2f-b1b8-6ef21025469b-cilium-cgroup\") pod \"cilium-4bkwv\" (UID: \"c1f42915-5501-4f2f-b1b8-6ef21025469b\") " pod="kube-system/cilium-4bkwv" Nov 1 00:40:46.021035 kubelet[1883]: I1101 00:40:46.021012 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c1f42915-5501-4f2f-b1b8-6ef21025469b-cilium-config-path\") pod \"cilium-4bkwv\" (UID: \"c1f42915-5501-4f2f-b1b8-6ef21025469b\") " pod="kube-system/cilium-4bkwv" Nov 1 00:40:46.021218 kubelet[1883]: I1101 00:40:46.021186 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq56b\" (UniqueName: \"kubernetes.io/projected/c1f42915-5501-4f2f-b1b8-6ef21025469b-kube-api-access-gq56b\") pod \"cilium-4bkwv\" (UID: \"c1f42915-5501-4f2f-b1b8-6ef21025469b\") " pod="kube-system/cilium-4bkwv" Nov 1 00:40:46.021393 kubelet[1883]: I1101 00:40:46.021372 
1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c1f42915-5501-4f2f-b1b8-6ef21025469b-lib-modules\") pod \"cilium-4bkwv\" (UID: \"c1f42915-5501-4f2f-b1b8-6ef21025469b\") " pod="kube-system/cilium-4bkwv" Nov 1 00:40:46.021581 kubelet[1883]: I1101 00:40:46.021553 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c1f42915-5501-4f2f-b1b8-6ef21025469b-cilium-run\") pod \"cilium-4bkwv\" (UID: \"c1f42915-5501-4f2f-b1b8-6ef21025469b\") " pod="kube-system/cilium-4bkwv" Nov 1 00:40:46.021812 kubelet[1883]: I1101 00:40:46.021791 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c1f42915-5501-4f2f-b1b8-6ef21025469b-cilium-ipsec-secrets\") pod \"cilium-4bkwv\" (UID: \"c1f42915-5501-4f2f-b1b8-6ef21025469b\") " pod="kube-system/cilium-4bkwv" Nov 1 00:40:46.021986 kubelet[1883]: I1101 00:40:46.021965 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c1f42915-5501-4f2f-b1b8-6ef21025469b-cni-path\") pod \"cilium-4bkwv\" (UID: \"c1f42915-5501-4f2f-b1b8-6ef21025469b\") " pod="kube-system/cilium-4bkwv" Nov 1 00:40:46.022183 kubelet[1883]: I1101 00:40:46.022161 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c1f42915-5501-4f2f-b1b8-6ef21025469b-clustermesh-secrets\") pod \"cilium-4bkwv\" (UID: \"c1f42915-5501-4f2f-b1b8-6ef21025469b\") " pod="kube-system/cilium-4bkwv" Nov 1 00:40:46.022365 kubelet[1883]: I1101 00:40:46.022344 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/c1f42915-5501-4f2f-b1b8-6ef21025469b-bpf-maps\") pod \"cilium-4bkwv\" (UID: \"c1f42915-5501-4f2f-b1b8-6ef21025469b\") " pod="kube-system/cilium-4bkwv" Nov 1 00:40:46.022529 kubelet[1883]: I1101 00:40:46.022509 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c1f42915-5501-4f2f-b1b8-6ef21025469b-hubble-tls\") pod \"cilium-4bkwv\" (UID: \"c1f42915-5501-4f2f-b1b8-6ef21025469b\") " pod="kube-system/cilium-4bkwv" Nov 1 00:40:46.170977 sshd[3665]: pam_unix(sshd:session): session closed for user core Nov 1 00:40:46.179011 systemd[1]: Started sshd@28-146.190.139.75:22-139.178.89.65:36050.service. Nov 1 00:40:46.184085 systemd[1]: sshd@27-146.190.139.75:22-139.178.89.65:47498.service: Deactivated successfully. Nov 1 00:40:46.185024 systemd[1]: session-27.scope: Deactivated successfully. Nov 1 00:40:46.186800 systemd-logind[1177]: Session 27 logged out. Waiting for processes to exit. Nov 1 00:40:46.188076 systemd-logind[1177]: Removed session 27. Nov 1 00:40:46.193041 kubelet[1883]: E1101 00:40:46.193001 1883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:40:46.193814 env[1183]: time="2025-11-01T00:40:46.193758621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4bkwv,Uid:c1f42915-5501-4f2f-b1b8-6ef21025469b,Namespace:kube-system,Attempt:0,}" Nov 1 00:40:46.236463 env[1183]: time="2025-11-01T00:40:46.235061710Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:40:46.236463 env[1183]: time="2025-11-01T00:40:46.235114595Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:40:46.236463 env[1183]: time="2025-11-01T00:40:46.235205397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:40:46.236463 env[1183]: time="2025-11-01T00:40:46.235345048Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dff049c58390571d7cfe132b1b56d52242a9454f974cccf13ad9eefac071fb4c pid=3690 runtime=io.containerd.runc.v2 Nov 1 00:40:46.250719 sshd[3680]: Accepted publickey for core from 139.178.89.65 port 36050 ssh2: RSA SHA256:qTXCs2mptgHeumafEfvg1OWutmYgYP0XSB3zx5Iy+CM Nov 1 00:40:46.250530 sshd[3680]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:40:46.259453 systemd[1]: Started session-28.scope. Nov 1 00:40:46.265817 systemd-logind[1177]: New session 28 of user core. Nov 1 00:40:46.277587 systemd[1]: Started cri-containerd-dff049c58390571d7cfe132b1b56d52242a9454f974cccf13ad9eefac071fb4c.scope. 
Nov 1 00:40:46.312407 env[1183]: time="2025-11-01T00:40:46.312346281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4bkwv,Uid:c1f42915-5501-4f2f-b1b8-6ef21025469b,Namespace:kube-system,Attempt:0,} returns sandbox id \"dff049c58390571d7cfe132b1b56d52242a9454f974cccf13ad9eefac071fb4c\"" Nov 1 00:40:46.313250 kubelet[1883]: E1101 00:40:46.313218 1883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:40:46.318179 env[1183]: time="2025-11-01T00:40:46.318129322Z" level=info msg="CreateContainer within sandbox \"dff049c58390571d7cfe132b1b56d52242a9454f974cccf13ad9eefac071fb4c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 1 00:40:46.348293 env[1183]: time="2025-11-01T00:40:46.348189830Z" level=info msg="CreateContainer within sandbox \"dff049c58390571d7cfe132b1b56d52242a9454f974cccf13ad9eefac071fb4c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"beb17fc9bd3c323f7e0afe7d6712bb6fd6832378f97688a08f238dfd78cc943d\"" Nov 1 00:40:46.351418 env[1183]: time="2025-11-01T00:40:46.351341751Z" level=info msg="StartContainer for \"beb17fc9bd3c323f7e0afe7d6712bb6fd6832378f97688a08f238dfd78cc943d\"" Nov 1 00:40:46.373549 systemd[1]: Started cri-containerd-beb17fc9bd3c323f7e0afe7d6712bb6fd6832378f97688a08f238dfd78cc943d.scope. Nov 1 00:40:46.401449 systemd[1]: cri-containerd-beb17fc9bd3c323f7e0afe7d6712bb6fd6832378f97688a08f238dfd78cc943d.scope: Deactivated successfully. 
Nov 1 00:40:46.421300 env[1183]: time="2025-11-01T00:40:46.421161700Z" level=info msg="shim disconnected" id=beb17fc9bd3c323f7e0afe7d6712bb6fd6832378f97688a08f238dfd78cc943d Nov 1 00:40:46.421707 env[1183]: time="2025-11-01T00:40:46.421678594Z" level=warning msg="cleaning up after shim disconnected" id=beb17fc9bd3c323f7e0afe7d6712bb6fd6832378f97688a08f238dfd78cc943d namespace=k8s.io Nov 1 00:40:46.421816 env[1183]: time="2025-11-01T00:40:46.421798667Z" level=info msg="cleaning up dead shim" Nov 1 00:40:46.437241 env[1183]: time="2025-11-01T00:40:46.437161634Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:40:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3755 runtime=io.containerd.runc.v2\ntime=\"2025-11-01T00:40:46Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/beb17fc9bd3c323f7e0afe7d6712bb6fd6832378f97688a08f238dfd78cc943d/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Nov 1 00:40:46.438024 env[1183]: time="2025-11-01T00:40:46.437883751Z" level=error msg="copy shim log" error="read /proc/self/fd/31: file already closed" Nov 1 00:40:46.438759 env[1183]: time="2025-11-01T00:40:46.438705235Z" level=error msg="Failed to pipe stderr of container \"beb17fc9bd3c323f7e0afe7d6712bb6fd6832378f97688a08f238dfd78cc943d\"" error="reading from a closed fifo" Nov 1 00:40:46.438938 env[1183]: time="2025-11-01T00:40:46.438776064Z" level=error msg="Failed to pipe stdout of container \"beb17fc9bd3c323f7e0afe7d6712bb6fd6832378f97688a08f238dfd78cc943d\"" error="reading from a closed fifo" Nov 1 00:40:46.441741 env[1183]: time="2025-11-01T00:40:46.441611679Z" level=error msg="StartContainer for \"beb17fc9bd3c323f7e0afe7d6712bb6fd6832378f97688a08f238dfd78cc943d\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Nov 1 00:40:46.441990 kubelet[1883]: E1101 00:40:46.441945 1883 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="beb17fc9bd3c323f7e0afe7d6712bb6fd6832378f97688a08f238dfd78cc943d" Nov 1 00:40:46.448441 kubelet[1883]: E1101 00:40:46.447683 1883 kuberuntime_manager.go:1341] "Unhandled Error" err=< Nov 1 00:40:46.448441 kubelet[1883]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Nov 1 00:40:46.448441 kubelet[1883]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Nov 1 00:40:46.448441 kubelet[1883]: rm /hostbin/cilium-mount Nov 1 00:40:46.448828 kubelet[1883]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gq56b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-4bkwv_kube-system(c1f42915-5501-4f2f-b1b8-6ef21025469b): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Nov 1 00:40:46.448828 kubelet[1883]: > logger="UnhandledError" Nov 1 00:40:46.448828 kubelet[1883]: E1101 00:40:46.448801 1883 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-4bkwv" podUID="c1f42915-5501-4f2f-b1b8-6ef21025469b" Nov 1 00:40:46.966109 kubelet[1883]: E1101 00:40:46.966045 1883 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 1 00:40:47.307490 env[1183]: time="2025-11-01T00:40:47.304673008Z" level=info msg="StopPodSandbox for \"dff049c58390571d7cfe132b1b56d52242a9454f974cccf13ad9eefac071fb4c\"" Nov 1 00:40:47.307490 env[1183]: time="2025-11-01T00:40:47.304738993Z" level=info msg="Container to stop \"beb17fc9bd3c323f7e0afe7d6712bb6fd6832378f97688a08f238dfd78cc943d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 00:40:47.307099 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dff049c58390571d7cfe132b1b56d52242a9454f974cccf13ad9eefac071fb4c-shm.mount: Deactivated successfully. Nov 1 00:40:47.319416 systemd[1]: cri-containerd-dff049c58390571d7cfe132b1b56d52242a9454f974cccf13ad9eefac071fb4c.scope: Deactivated successfully. Nov 1 00:40:47.356583 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dff049c58390571d7cfe132b1b56d52242a9454f974cccf13ad9eefac071fb4c-rootfs.mount: Deactivated successfully. 
Nov 1 00:40:47.365182 env[1183]: time="2025-11-01T00:40:47.365090669Z" level=info msg="shim disconnected" id=dff049c58390571d7cfe132b1b56d52242a9454f974cccf13ad9eefac071fb4c Nov 1 00:40:47.365182 env[1183]: time="2025-11-01T00:40:47.365166687Z" level=warning msg="cleaning up after shim disconnected" id=dff049c58390571d7cfe132b1b56d52242a9454f974cccf13ad9eefac071fb4c namespace=k8s.io Nov 1 00:40:47.365182 env[1183]: time="2025-11-01T00:40:47.365181497Z" level=info msg="cleaning up dead shim" Nov 1 00:40:47.376249 env[1183]: time="2025-11-01T00:40:47.376186805Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:40:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3785 runtime=io.containerd.runc.v2\n" Nov 1 00:40:47.376898 env[1183]: time="2025-11-01T00:40:47.376846032Z" level=info msg="TearDown network for sandbox \"dff049c58390571d7cfe132b1b56d52242a9454f974cccf13ad9eefac071fb4c\" successfully" Nov 1 00:40:47.377036 env[1183]: time="2025-11-01T00:40:47.377013642Z" level=info msg="StopPodSandbox for \"dff049c58390571d7cfe132b1b56d52242a9454f974cccf13ad9eefac071fb4c\" returns successfully" Nov 1 00:40:47.433799 kubelet[1883]: I1101 00:40:47.433704 1883 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c1f42915-5501-4f2f-b1b8-6ef21025469b-cilium-run\") pod \"c1f42915-5501-4f2f-b1b8-6ef21025469b\" (UID: \"c1f42915-5501-4f2f-b1b8-6ef21025469b\") " Nov 1 00:40:47.434105 kubelet[1883]: I1101 00:40:47.433826 1883 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c1f42915-5501-4f2f-b1b8-6ef21025469b-hostproc\") pod \"c1f42915-5501-4f2f-b1b8-6ef21025469b\" (UID: \"c1f42915-5501-4f2f-b1b8-6ef21025469b\") " Nov 1 00:40:47.434105 kubelet[1883]: I1101 00:40:47.433909 1883 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/c1f42915-5501-4f2f-b1b8-6ef21025469b-cilium-cgroup\") pod \"c1f42915-5501-4f2f-b1b8-6ef21025469b\" (UID: \"c1f42915-5501-4f2f-b1b8-6ef21025469b\") " Nov 1 00:40:47.434105 kubelet[1883]: I1101 00:40:47.433938 1883 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c1f42915-5501-4f2f-b1b8-6ef21025469b-cni-path\") pod \"c1f42915-5501-4f2f-b1b8-6ef21025469b\" (UID: \"c1f42915-5501-4f2f-b1b8-6ef21025469b\") " Nov 1 00:40:47.434105 kubelet[1883]: I1101 00:40:47.433981 1883 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c1f42915-5501-4f2f-b1b8-6ef21025469b-xtables-lock\") pod \"c1f42915-5501-4f2f-b1b8-6ef21025469b\" (UID: \"c1f42915-5501-4f2f-b1b8-6ef21025469b\") " Nov 1 00:40:47.434105 kubelet[1883]: I1101 00:40:47.434023 1883 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gq56b\" (UniqueName: \"kubernetes.io/projected/c1f42915-5501-4f2f-b1b8-6ef21025469b-kube-api-access-gq56b\") pod \"c1f42915-5501-4f2f-b1b8-6ef21025469b\" (UID: \"c1f42915-5501-4f2f-b1b8-6ef21025469b\") " Nov 1 00:40:47.434105 kubelet[1883]: I1101 00:40:47.434077 1883 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c1f42915-5501-4f2f-b1b8-6ef21025469b-hubble-tls\") pod \"c1f42915-5501-4f2f-b1b8-6ef21025469b\" (UID: \"c1f42915-5501-4f2f-b1b8-6ef21025469b\") " Nov 1 00:40:47.434383 kubelet[1883]: I1101 00:40:47.434115 1883 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c1f42915-5501-4f2f-b1b8-6ef21025469b-cilium-config-path\") pod \"c1f42915-5501-4f2f-b1b8-6ef21025469b\" (UID: \"c1f42915-5501-4f2f-b1b8-6ef21025469b\") " Nov 1 00:40:47.434383 kubelet[1883]: I1101 00:40:47.434172 1883 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c1f42915-5501-4f2f-b1b8-6ef21025469b-clustermesh-secrets\") pod \"c1f42915-5501-4f2f-b1b8-6ef21025469b\" (UID: \"c1f42915-5501-4f2f-b1b8-6ef21025469b\") " Nov 1 00:40:47.434383 kubelet[1883]: I1101 00:40:47.434224 1883 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c1f42915-5501-4f2f-b1b8-6ef21025469b-etc-cni-netd\") pod \"c1f42915-5501-4f2f-b1b8-6ef21025469b\" (UID: \"c1f42915-5501-4f2f-b1b8-6ef21025469b\") " Nov 1 00:40:47.434383 kubelet[1883]: I1101 00:40:47.434251 1883 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c1f42915-5501-4f2f-b1b8-6ef21025469b-host-proc-sys-kernel\") pod \"c1f42915-5501-4f2f-b1b8-6ef21025469b\" (UID: \"c1f42915-5501-4f2f-b1b8-6ef21025469b\") " Nov 1 00:40:47.434383 kubelet[1883]: I1101 00:40:47.434313 1883 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c1f42915-5501-4f2f-b1b8-6ef21025469b-lib-modules\") pod \"c1f42915-5501-4f2f-b1b8-6ef21025469b\" (UID: \"c1f42915-5501-4f2f-b1b8-6ef21025469b\") " Nov 1 00:40:47.434383 kubelet[1883]: I1101 00:40:47.434342 1883 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c1f42915-5501-4f2f-b1b8-6ef21025469b-host-proc-sys-net\") pod \"c1f42915-5501-4f2f-b1b8-6ef21025469b\" (UID: \"c1f42915-5501-4f2f-b1b8-6ef21025469b\") " Nov 1 00:40:47.434606 kubelet[1883]: I1101 00:40:47.434368 1883 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c1f42915-5501-4f2f-b1b8-6ef21025469b-bpf-maps\") pod \"c1f42915-5501-4f2f-b1b8-6ef21025469b\" (UID: 
\"c1f42915-5501-4f2f-b1b8-6ef21025469b\") " Nov 1 00:40:47.434606 kubelet[1883]: I1101 00:40:47.434420 1883 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c1f42915-5501-4f2f-b1b8-6ef21025469b-cilium-ipsec-secrets\") pod \"c1f42915-5501-4f2f-b1b8-6ef21025469b\" (UID: \"c1f42915-5501-4f2f-b1b8-6ef21025469b\") " Nov 1 00:40:47.436681 kubelet[1883]: I1101 00:40:47.436588 1883 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1f42915-5501-4f2f-b1b8-6ef21025469b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c1f42915-5501-4f2f-b1b8-6ef21025469b" (UID: "c1f42915-5501-4f2f-b1b8-6ef21025469b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:40:47.438081 kubelet[1883]: I1101 00:40:47.438027 1883 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1f42915-5501-4f2f-b1b8-6ef21025469b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c1f42915-5501-4f2f-b1b8-6ef21025469b" (UID: "c1f42915-5501-4f2f-b1b8-6ef21025469b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:40:47.438335 kubelet[1883]: I1101 00:40:47.438249 1883 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1f42915-5501-4f2f-b1b8-6ef21025469b-cni-path" (OuterVolumeSpecName: "cni-path") pod "c1f42915-5501-4f2f-b1b8-6ef21025469b" (UID: "c1f42915-5501-4f2f-b1b8-6ef21025469b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:40:47.442013 systemd[1]: var-lib-kubelet-pods-c1f42915\x2d5501\x2d4f2f\x2db1b8\x2d6ef21025469b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Nov 1 00:40:47.444484 kubelet[1883]: I1101 00:40:47.444442 1883 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1f42915-5501-4f2f-b1b8-6ef21025469b-hostproc" (OuterVolumeSpecName: "hostproc") pod "c1f42915-5501-4f2f-b1b8-6ef21025469b" (UID: "c1f42915-5501-4f2f-b1b8-6ef21025469b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:40:47.444738 kubelet[1883]: I1101 00:40:47.444600 1883 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1f42915-5501-4f2f-b1b8-6ef21025469b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c1f42915-5501-4f2f-b1b8-6ef21025469b" (UID: "c1f42915-5501-4f2f-b1b8-6ef21025469b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:40:47.444923 kubelet[1883]: I1101 00:40:47.444897 1883 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1f42915-5501-4f2f-b1b8-6ef21025469b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c1f42915-5501-4f2f-b1b8-6ef21025469b" (UID: "c1f42915-5501-4f2f-b1b8-6ef21025469b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 00:40:47.445079 kubelet[1883]: I1101 00:40:47.445060 1883 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1f42915-5501-4f2f-b1b8-6ef21025469b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c1f42915-5501-4f2f-b1b8-6ef21025469b" (UID: "c1f42915-5501-4f2f-b1b8-6ef21025469b"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:40:47.445191 kubelet[1883]: I1101 00:40:47.445174 1883 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1f42915-5501-4f2f-b1b8-6ef21025469b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c1f42915-5501-4f2f-b1b8-6ef21025469b" (UID: "c1f42915-5501-4f2f-b1b8-6ef21025469b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:40:47.445293 kubelet[1883]: I1101 00:40:47.445275 1883 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1f42915-5501-4f2f-b1b8-6ef21025469b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c1f42915-5501-4f2f-b1b8-6ef21025469b" (UID: "c1f42915-5501-4f2f-b1b8-6ef21025469b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:40:47.445525 kubelet[1883]: I1101 00:40:47.445481 1883 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1f42915-5501-4f2f-b1b8-6ef21025469b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c1f42915-5501-4f2f-b1b8-6ef21025469b" (UID: "c1f42915-5501-4f2f-b1b8-6ef21025469b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:40:47.445693 kubelet[1883]: I1101 00:40:47.445585 1883 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1f42915-5501-4f2f-b1b8-6ef21025469b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c1f42915-5501-4f2f-b1b8-6ef21025469b" (UID: "c1f42915-5501-4f2f-b1b8-6ef21025469b"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:40:47.448282 kubelet[1883]: I1101 00:40:47.448231 1883 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1f42915-5501-4f2f-b1b8-6ef21025469b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c1f42915-5501-4f2f-b1b8-6ef21025469b" (UID: "c1f42915-5501-4f2f-b1b8-6ef21025469b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 00:40:47.453174 systemd[1]: var-lib-kubelet-pods-c1f42915\x2d5501\x2d4f2f\x2db1b8\x2d6ef21025469b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Nov 1 00:40:47.455200 kubelet[1883]: I1101 00:40:47.455155 1883 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1f42915-5501-4f2f-b1b8-6ef21025469b-kube-api-access-gq56b" (OuterVolumeSpecName: "kube-api-access-gq56b") pod "c1f42915-5501-4f2f-b1b8-6ef21025469b" (UID: "c1f42915-5501-4f2f-b1b8-6ef21025469b"). InnerVolumeSpecName "kube-api-access-gq56b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 00:40:47.456116 kubelet[1883]: I1101 00:40:47.456028 1883 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1f42915-5501-4f2f-b1b8-6ef21025469b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c1f42915-5501-4f2f-b1b8-6ef21025469b" (UID: "c1f42915-5501-4f2f-b1b8-6ef21025469b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 00:40:47.461439 kubelet[1883]: I1101 00:40:47.461387 1883 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1f42915-5501-4f2f-b1b8-6ef21025469b-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "c1f42915-5501-4f2f-b1b8-6ef21025469b" (UID: "c1f42915-5501-4f2f-b1b8-6ef21025469b"). 
InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 00:40:47.535158 kubelet[1883]: I1101 00:40:47.535087 1883 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c1f42915-5501-4f2f-b1b8-6ef21025469b-cilium-config-path\") on node \"ci-3510.3.8-n-368ce9a156\" DevicePath \"\"" Nov 1 00:40:47.535158 kubelet[1883]: I1101 00:40:47.535137 1883 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c1f42915-5501-4f2f-b1b8-6ef21025469b-clustermesh-secrets\") on node \"ci-3510.3.8-n-368ce9a156\" DevicePath \"\"" Nov 1 00:40:47.535158 kubelet[1883]: I1101 00:40:47.535147 1883 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c1f42915-5501-4f2f-b1b8-6ef21025469b-etc-cni-netd\") on node \"ci-3510.3.8-n-368ce9a156\" DevicePath \"\"" Nov 1 00:40:47.535158 kubelet[1883]: I1101 00:40:47.535161 1883 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c1f42915-5501-4f2f-b1b8-6ef21025469b-host-proc-sys-kernel\") on node \"ci-3510.3.8-n-368ce9a156\" DevicePath \"\"" Nov 1 00:40:47.535158 kubelet[1883]: I1101 00:40:47.535172 1883 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c1f42915-5501-4f2f-b1b8-6ef21025469b-lib-modules\") on node \"ci-3510.3.8-n-368ce9a156\" DevicePath \"\"" Nov 1 00:40:47.535158 kubelet[1883]: I1101 00:40:47.535181 1883 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c1f42915-5501-4f2f-b1b8-6ef21025469b-host-proc-sys-net\") on node \"ci-3510.3.8-n-368ce9a156\" DevicePath \"\"" Nov 1 00:40:47.535837 kubelet[1883]: I1101 00:40:47.535190 1883 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/c1f42915-5501-4f2f-b1b8-6ef21025469b-bpf-maps\") on node \"ci-3510.3.8-n-368ce9a156\" DevicePath \"\"" Nov 1 00:40:47.535837 kubelet[1883]: I1101 00:40:47.535200 1883 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c1f42915-5501-4f2f-b1b8-6ef21025469b-cilium-ipsec-secrets\") on node \"ci-3510.3.8-n-368ce9a156\" DevicePath \"\"" Nov 1 00:40:47.535837 kubelet[1883]: I1101 00:40:47.535213 1883 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c1f42915-5501-4f2f-b1b8-6ef21025469b-cilium-run\") on node \"ci-3510.3.8-n-368ce9a156\" DevicePath \"\"" Nov 1 00:40:47.535837 kubelet[1883]: I1101 00:40:47.535224 1883 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c1f42915-5501-4f2f-b1b8-6ef21025469b-hostproc\") on node \"ci-3510.3.8-n-368ce9a156\" DevicePath \"\"" Nov 1 00:40:47.535837 kubelet[1883]: I1101 00:40:47.535232 1883 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c1f42915-5501-4f2f-b1b8-6ef21025469b-cilium-cgroup\") on node \"ci-3510.3.8-n-368ce9a156\" DevicePath \"\"" Nov 1 00:40:47.535837 kubelet[1883]: I1101 00:40:47.535241 1883 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c1f42915-5501-4f2f-b1b8-6ef21025469b-cni-path\") on node \"ci-3510.3.8-n-368ce9a156\" DevicePath \"\"" Nov 1 00:40:47.535837 kubelet[1883]: I1101 00:40:47.535251 1883 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gq56b\" (UniqueName: \"kubernetes.io/projected/c1f42915-5501-4f2f-b1b8-6ef21025469b-kube-api-access-gq56b\") on node \"ci-3510.3.8-n-368ce9a156\" DevicePath \"\"" Nov 1 00:40:47.535837 kubelet[1883]: I1101 00:40:47.535261 1883 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/c1f42915-5501-4f2f-b1b8-6ef21025469b-hubble-tls\") on node \"ci-3510.3.8-n-368ce9a156\" DevicePath \"\"" Nov 1 00:40:47.535837 kubelet[1883]: I1101 00:40:47.535270 1883 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c1f42915-5501-4f2f-b1b8-6ef21025469b-xtables-lock\") on node \"ci-3510.3.8-n-368ce9a156\" DevicePath \"\"" Nov 1 00:40:48.129427 systemd[1]: var-lib-kubelet-pods-c1f42915\x2d5501\x2d4f2f\x2db1b8\x2d6ef21025469b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgq56b.mount: Deactivated successfully. Nov 1 00:40:48.129870 systemd[1]: var-lib-kubelet-pods-c1f42915\x2d5501\x2d4f2f\x2db1b8\x2d6ef21025469b-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Nov 1 00:40:48.308308 kubelet[1883]: I1101 00:40:48.308242 1883 scope.go:117] "RemoveContainer" containerID="beb17fc9bd3c323f7e0afe7d6712bb6fd6832378f97688a08f238dfd78cc943d" Nov 1 00:40:48.311095 env[1183]: time="2025-11-01T00:40:48.310777310Z" level=info msg="RemoveContainer for \"beb17fc9bd3c323f7e0afe7d6712bb6fd6832378f97688a08f238dfd78cc943d\"" Nov 1 00:40:48.313731 systemd[1]: Removed slice kubepods-burstable-podc1f42915_5501_4f2f_b1b8_6ef21025469b.slice. Nov 1 00:40:48.316432 env[1183]: time="2025-11-01T00:40:48.316375795Z" level=info msg="RemoveContainer for \"beb17fc9bd3c323f7e0afe7d6712bb6fd6832378f97688a08f238dfd78cc943d\" returns successfully" Nov 1 00:40:48.370567 kubelet[1883]: I1101 00:40:48.370487 1883 memory_manager.go:355] "RemoveStaleState removing state" podUID="c1f42915-5501-4f2f-b1b8-6ef21025469b" containerName="mount-cgroup" Nov 1 00:40:48.380671 systemd[1]: Created slice kubepods-burstable-podbe4ae3c0_e147_4dde_be36_a8092f00f10e.slice. 
Nov 1 00:40:48.442980 kubelet[1883]: I1101 00:40:48.442903 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/be4ae3c0-e147-4dde-be36-a8092f00f10e-xtables-lock\") pod \"cilium-9b2w6\" (UID: \"be4ae3c0-e147-4dde-be36-a8092f00f10e\") " pod="kube-system/cilium-9b2w6" Nov 1 00:40:48.443358 kubelet[1883]: I1101 00:40:48.443321 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/be4ae3c0-e147-4dde-be36-a8092f00f10e-host-proc-sys-net\") pod \"cilium-9b2w6\" (UID: \"be4ae3c0-e147-4dde-be36-a8092f00f10e\") " pod="kube-system/cilium-9b2w6" Nov 1 00:40:48.443523 kubelet[1883]: I1101 00:40:48.443496 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/be4ae3c0-e147-4dde-be36-a8092f00f10e-clustermesh-secrets\") pod \"cilium-9b2w6\" (UID: \"be4ae3c0-e147-4dde-be36-a8092f00f10e\") " pod="kube-system/cilium-9b2w6" Nov 1 00:40:48.443700 kubelet[1883]: I1101 00:40:48.443666 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/be4ae3c0-e147-4dde-be36-a8092f00f10e-cilium-config-path\") pod \"cilium-9b2w6\" (UID: \"be4ae3c0-e147-4dde-be36-a8092f00f10e\") " pod="kube-system/cilium-9b2w6" Nov 1 00:40:48.443841 kubelet[1883]: I1101 00:40:48.443816 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkj8p\" (UniqueName: \"kubernetes.io/projected/be4ae3c0-e147-4dde-be36-a8092f00f10e-kube-api-access-gkj8p\") pod \"cilium-9b2w6\" (UID: \"be4ae3c0-e147-4dde-be36-a8092f00f10e\") " pod="kube-system/cilium-9b2w6" Nov 1 00:40:48.444060 kubelet[1883]: I1101 00:40:48.443948 1883 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/be4ae3c0-e147-4dde-be36-a8092f00f10e-cilium-run\") pod \"cilium-9b2w6\" (UID: \"be4ae3c0-e147-4dde-be36-a8092f00f10e\") " pod="kube-system/cilium-9b2w6" Nov 1 00:40:48.444302 kubelet[1883]: I1101 00:40:48.444275 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/be4ae3c0-e147-4dde-be36-a8092f00f10e-cilium-cgroup\") pod \"cilium-9b2w6\" (UID: \"be4ae3c0-e147-4dde-be36-a8092f00f10e\") " pod="kube-system/cilium-9b2w6" Nov 1 00:40:48.444453 kubelet[1883]: I1101 00:40:48.444430 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/be4ae3c0-e147-4dde-be36-a8092f00f10e-cilium-ipsec-secrets\") pod \"cilium-9b2w6\" (UID: \"be4ae3c0-e147-4dde-be36-a8092f00f10e\") " pod="kube-system/cilium-9b2w6" Nov 1 00:40:48.444607 kubelet[1883]: I1101 00:40:48.444563 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/be4ae3c0-e147-4dde-be36-a8092f00f10e-bpf-maps\") pod \"cilium-9b2w6\" (UID: \"be4ae3c0-e147-4dde-be36-a8092f00f10e\") " pod="kube-system/cilium-9b2w6" Nov 1 00:40:48.444805 kubelet[1883]: I1101 00:40:48.444782 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/be4ae3c0-e147-4dde-be36-a8092f00f10e-cni-path\") pod \"cilium-9b2w6\" (UID: \"be4ae3c0-e147-4dde-be36-a8092f00f10e\") " pod="kube-system/cilium-9b2w6" Nov 1 00:40:48.444926 kubelet[1883]: I1101 00:40:48.444904 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/be4ae3c0-e147-4dde-be36-a8092f00f10e-lib-modules\") pod \"cilium-9b2w6\" (UID: \"be4ae3c0-e147-4dde-be36-a8092f00f10e\") " pod="kube-system/cilium-9b2w6" Nov 1 00:40:48.445072 kubelet[1883]: I1101 00:40:48.445023 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/be4ae3c0-e147-4dde-be36-a8092f00f10e-hostproc\") pod \"cilium-9b2w6\" (UID: \"be4ae3c0-e147-4dde-be36-a8092f00f10e\") " pod="kube-system/cilium-9b2w6" Nov 1 00:40:48.445193 kubelet[1883]: I1101 00:40:48.445169 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/be4ae3c0-e147-4dde-be36-a8092f00f10e-etc-cni-netd\") pod \"cilium-9b2w6\" (UID: \"be4ae3c0-e147-4dde-be36-a8092f00f10e\") " pod="kube-system/cilium-9b2w6" Nov 1 00:40:48.445317 kubelet[1883]: I1101 00:40:48.445293 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/be4ae3c0-e147-4dde-be36-a8092f00f10e-host-proc-sys-kernel\") pod \"cilium-9b2w6\" (UID: \"be4ae3c0-e147-4dde-be36-a8092f00f10e\") " pod="kube-system/cilium-9b2w6" Nov 1 00:40:48.445450 kubelet[1883]: I1101 00:40:48.445426 1883 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/be4ae3c0-e147-4dde-be36-a8092f00f10e-hubble-tls\") pod \"cilium-9b2w6\" (UID: \"be4ae3c0-e147-4dde-be36-a8092f00f10e\") " pod="kube-system/cilium-9b2w6" Nov 1 00:40:48.684136 kubelet[1883]: E1101 00:40:48.684061 1883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:40:48.685076 env[1183]: time="2025-11-01T00:40:48.684818192Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9b2w6,Uid:be4ae3c0-e147-4dde-be36-a8092f00f10e,Namespace:kube-system,Attempt:0,}" Nov 1 00:40:48.700174 env[1183]: time="2025-11-01T00:40:48.700037140Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:40:48.700174 env[1183]: time="2025-11-01T00:40:48.700100670Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:40:48.700500 env[1183]: time="2025-11-01T00:40:48.700127478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:40:48.700713 env[1183]: time="2025-11-01T00:40:48.700426503Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3209c53cbd5d80a0bba2b25b65c06bbe3fb036485c7e1aa600497743684dc829 pid=3814 runtime=io.containerd.runc.v2 Nov 1 00:40:48.715724 systemd[1]: Started cri-containerd-3209c53cbd5d80a0bba2b25b65c06bbe3fb036485c7e1aa600497743684dc829.scope. 
Nov 1 00:40:48.756869 env[1183]: time="2025-11-01T00:40:48.756806053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9b2w6,Uid:be4ae3c0-e147-4dde-be36-a8092f00f10e,Namespace:kube-system,Attempt:0,} returns sandbox id \"3209c53cbd5d80a0bba2b25b65c06bbe3fb036485c7e1aa600497743684dc829\"" Nov 1 00:40:48.758949 kubelet[1883]: E1101 00:40:48.758872 1883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:40:48.767702 env[1183]: time="2025-11-01T00:40:48.767608349Z" level=info msg="CreateContainer within sandbox \"3209c53cbd5d80a0bba2b25b65c06bbe3fb036485c7e1aa600497743684dc829\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 1 00:40:48.784653 env[1183]: time="2025-11-01T00:40:48.784547178Z" level=info msg="CreateContainer within sandbox \"3209c53cbd5d80a0bba2b25b65c06bbe3fb036485c7e1aa600497743684dc829\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9fb2888e9d7fcb1dc8acba0b0370ceb54c0ffe0f58fa8d914f41a2ebd615394e\"" Nov 1 00:40:48.785577 env[1183]: time="2025-11-01T00:40:48.785529608Z" level=info msg="StartContainer for \"9fb2888e9d7fcb1dc8acba0b0370ceb54c0ffe0f58fa8d914f41a2ebd615394e\"" Nov 1 00:40:48.814286 systemd[1]: Started cri-containerd-9fb2888e9d7fcb1dc8acba0b0370ceb54c0ffe0f58fa8d914f41a2ebd615394e.scope. 
Nov 1 00:40:48.850352 kubelet[1883]: I1101 00:40:48.850291 1883 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1f42915-5501-4f2f-b1b8-6ef21025469b" path="/var/lib/kubelet/pods/c1f42915-5501-4f2f-b1b8-6ef21025469b/volumes" Nov 1 00:40:48.866073 env[1183]: time="2025-11-01T00:40:48.866009973Z" level=info msg="StartContainer for \"9fb2888e9d7fcb1dc8acba0b0370ceb54c0ffe0f58fa8d914f41a2ebd615394e\" returns successfully" Nov 1 00:40:48.923323 systemd[1]: cri-containerd-9fb2888e9d7fcb1dc8acba0b0370ceb54c0ffe0f58fa8d914f41a2ebd615394e.scope: Deactivated successfully. Nov 1 00:40:48.968977 env[1183]: time="2025-11-01T00:40:48.968810464Z" level=info msg="shim disconnected" id=9fb2888e9d7fcb1dc8acba0b0370ceb54c0ffe0f58fa8d914f41a2ebd615394e Nov 1 00:40:48.968977 env[1183]: time="2025-11-01T00:40:48.968877464Z" level=warning msg="cleaning up after shim disconnected" id=9fb2888e9d7fcb1dc8acba0b0370ceb54c0ffe0f58fa8d914f41a2ebd615394e namespace=k8s.io Nov 1 00:40:48.968977 env[1183]: time="2025-11-01T00:40:48.968892656Z" level=info msg="cleaning up dead shim" Nov 1 00:40:48.973484 kubelet[1883]: I1101 00:40:48.973398 1883 setters.go:602] "Node became not ready" node="ci-3510.3.8-n-368ce9a156" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-01T00:40:48Z","lastTransitionTime":"2025-11-01T00:40:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Nov 1 00:40:48.995029 env[1183]: time="2025-11-01T00:40:48.994953713Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:40:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3897 runtime=io.containerd.runc.v2\n" Nov 1 00:40:49.315795 kubelet[1883]: E1101 00:40:49.315598 1883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:40:49.319183 env[1183]: time="2025-11-01T00:40:49.319129270Z" level=info msg="CreateContainer within sandbox \"3209c53cbd5d80a0bba2b25b65c06bbe3fb036485c7e1aa600497743684dc829\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 1 00:40:49.334341 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount762919850.mount: Deactivated successfully. Nov 1 00:40:49.346389 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1021242817.mount: Deactivated successfully. Nov 1 00:40:49.356641 env[1183]: time="2025-11-01T00:40:49.356541482Z" level=info msg="CreateContainer within sandbox \"3209c53cbd5d80a0bba2b25b65c06bbe3fb036485c7e1aa600497743684dc829\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"334334f8f75bb22a0061128ccae8eec6299457015f1e1e8d729034dd66d9361a\"" Nov 1 00:40:49.357470 env[1183]: time="2025-11-01T00:40:49.357422849Z" level=info msg="StartContainer for \"334334f8f75bb22a0061128ccae8eec6299457015f1e1e8d729034dd66d9361a\"" Nov 1 00:40:49.378803 systemd[1]: Started cri-containerd-334334f8f75bb22a0061128ccae8eec6299457015f1e1e8d729034dd66d9361a.scope. Nov 1 00:40:49.416844 env[1183]: time="2025-11-01T00:40:49.416784737Z" level=info msg="StartContainer for \"334334f8f75bb22a0061128ccae8eec6299457015f1e1e8d729034dd66d9361a\" returns successfully" Nov 1 00:40:49.427684 systemd[1]: cri-containerd-334334f8f75bb22a0061128ccae8eec6299457015f1e1e8d729034dd66d9361a.scope: Deactivated successfully. 
Nov 1 00:40:49.456614 env[1183]: time="2025-11-01T00:40:49.456550230Z" level=info msg="shim disconnected" id=334334f8f75bb22a0061128ccae8eec6299457015f1e1e8d729034dd66d9361a Nov 1 00:40:49.456614 env[1183]: time="2025-11-01T00:40:49.456602819Z" level=warning msg="cleaning up after shim disconnected" id=334334f8f75bb22a0061128ccae8eec6299457015f1e1e8d729034dd66d9361a namespace=k8s.io Nov 1 00:40:49.456614 env[1183]: time="2025-11-01T00:40:49.456613220Z" level=info msg="cleaning up dead shim" Nov 1 00:40:49.466708 env[1183]: time="2025-11-01T00:40:49.466653702Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:40:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3960 runtime=io.containerd.runc.v2\n" Nov 1 00:40:49.530505 kubelet[1883]: W1101 00:40:49.527690 1883 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1f42915_5501_4f2f_b1b8_6ef21025469b.slice/cri-containerd-beb17fc9bd3c323f7e0afe7d6712bb6fd6832378f97688a08f238dfd78cc943d.scope WatchSource:0}: container "beb17fc9bd3c323f7e0afe7d6712bb6fd6832378f97688a08f238dfd78cc943d" in namespace "k8s.io": not found Nov 1 00:40:50.319955 kubelet[1883]: E1101 00:40:50.319896 1883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:40:50.325776 env[1183]: time="2025-11-01T00:40:50.325724420Z" level=info msg="CreateContainer within sandbox \"3209c53cbd5d80a0bba2b25b65c06bbe3fb036485c7e1aa600497743684dc829\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 1 00:40:50.356853 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount847386840.mount: Deactivated successfully. Nov 1 00:40:50.371672 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1711751492.mount: Deactivated successfully. 
Nov 1 00:40:50.418486 env[1183]: time="2025-11-01T00:40:50.376105918Z" level=info msg="CreateContainer within sandbox \"3209c53cbd5d80a0bba2b25b65c06bbe3fb036485c7e1aa600497743684dc829\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"bdc15c1808344cb1d9d46021c17a635b8956fe7a6ce30e442df02483c71fb7ec\"" Nov 1 00:40:50.418486 env[1183]: time="2025-11-01T00:40:50.377002145Z" level=info msg="StartContainer for \"bdc15c1808344cb1d9d46021c17a635b8956fe7a6ce30e442df02483c71fb7ec\"" Nov 1 00:40:50.438512 systemd[1]: Started cri-containerd-bdc15c1808344cb1d9d46021c17a635b8956fe7a6ce30e442df02483c71fb7ec.scope. Nov 1 00:40:50.486731 env[1183]: time="2025-11-01T00:40:50.486653784Z" level=info msg="StartContainer for \"bdc15c1808344cb1d9d46021c17a635b8956fe7a6ce30e442df02483c71fb7ec\" returns successfully" Nov 1 00:40:50.490469 systemd[1]: cri-containerd-bdc15c1808344cb1d9d46021c17a635b8956fe7a6ce30e442df02483c71fb7ec.scope: Deactivated successfully. Nov 1 00:40:50.526250 env[1183]: time="2025-11-01T00:40:50.526179963Z" level=info msg="shim disconnected" id=bdc15c1808344cb1d9d46021c17a635b8956fe7a6ce30e442df02483c71fb7ec Nov 1 00:40:50.526250 env[1183]: time="2025-11-01T00:40:50.526230227Z" level=warning msg="cleaning up after shim disconnected" id=bdc15c1808344cb1d9d46021c17a635b8956fe7a6ce30e442df02483c71fb7ec namespace=k8s.io Nov 1 00:40:50.526250 env[1183]: time="2025-11-01T00:40:50.526240240Z" level=info msg="cleaning up dead shim" Nov 1 00:40:50.536479 env[1183]: time="2025-11-01T00:40:50.536414407Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:40:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4021 runtime=io.containerd.runc.v2\n" Nov 1 00:40:51.324161 kubelet[1883]: E1101 00:40:51.324062 1883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:40:51.327113 env[1183]: 
time="2025-11-01T00:40:51.327055348Z" level=info msg="CreateContainer within sandbox \"3209c53cbd5d80a0bba2b25b65c06bbe3fb036485c7e1aa600497743684dc829\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 1 00:40:51.353726 env[1183]: time="2025-11-01T00:40:51.353469709Z" level=info msg="CreateContainer within sandbox \"3209c53cbd5d80a0bba2b25b65c06bbe3fb036485c7e1aa600497743684dc829\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"21efd964add3dcd173248185fc1664678d363a2fd92716ab5075f16c0e66cd8e\"" Nov 1 00:40:51.354672 env[1183]: time="2025-11-01T00:40:51.354568101Z" level=info msg="StartContainer for \"21efd964add3dcd173248185fc1664678d363a2fd92716ab5075f16c0e66cd8e\"" Nov 1 00:40:51.382517 systemd[1]: Started cri-containerd-21efd964add3dcd173248185fc1664678d363a2fd92716ab5075f16c0e66cd8e.scope. Nov 1 00:40:51.417308 systemd[1]: cri-containerd-21efd964add3dcd173248185fc1664678d363a2fd92716ab5075f16c0e66cd8e.scope: Deactivated successfully. 
Nov 1 00:40:51.420557 env[1183]: time="2025-11-01T00:40:51.420241103Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe4ae3c0_e147_4dde_be36_a8092f00f10e.slice/cri-containerd-21efd964add3dcd173248185fc1664678d363a2fd92716ab5075f16c0e66cd8e.scope/memory.events\": no such file or directory" Nov 1 00:40:51.425474 env[1183]: time="2025-11-01T00:40:51.425371528Z" level=info msg="StartContainer for \"21efd964add3dcd173248185fc1664678d363a2fd92716ab5075f16c0e66cd8e\" returns successfully" Nov 1 00:40:51.456610 env[1183]: time="2025-11-01T00:40:51.456548997Z" level=info msg="shim disconnected" id=21efd964add3dcd173248185fc1664678d363a2fd92716ab5075f16c0e66cd8e Nov 1 00:40:51.457039 env[1183]: time="2025-11-01T00:40:51.456998930Z" level=warning msg="cleaning up after shim disconnected" id=21efd964add3dcd173248185fc1664678d363a2fd92716ab5075f16c0e66cd8e namespace=k8s.io Nov 1 00:40:51.457162 env[1183]: time="2025-11-01T00:40:51.457134647Z" level=info msg="cleaning up dead shim" Nov 1 00:40:51.473141 env[1183]: time="2025-11-01T00:40:51.473082998Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:40:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4078 runtime=io.containerd.runc.v2\n" Nov 1 00:40:51.967730 kubelet[1883]: E1101 00:40:51.967659 1883 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 1 00:40:52.130400 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-21efd964add3dcd173248185fc1664678d363a2fd92716ab5075f16c0e66cd8e-rootfs.mount: Deactivated successfully. 
Nov 1 00:40:52.329590 kubelet[1883]: E1101 00:40:52.329457 1883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:40:52.334447 env[1183]: time="2025-11-01T00:40:52.334376028Z" level=info msg="CreateContainer within sandbox \"3209c53cbd5d80a0bba2b25b65c06bbe3fb036485c7e1aa600497743684dc829\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 1 00:40:52.366811 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2474309011.mount: Deactivated successfully. Nov 1 00:40:52.372211 env[1183]: time="2025-11-01T00:40:52.372134418Z" level=info msg="CreateContainer within sandbox \"3209c53cbd5d80a0bba2b25b65c06bbe3fb036485c7e1aa600497743684dc829\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b4d4a37d8d2a22c6ef7ac32ff0d8579cdbbc01e81e25642789414e7c4643454a\"" Nov 1 00:40:52.373409 env[1183]: time="2025-11-01T00:40:52.373365815Z" level=info msg="StartContainer for \"b4d4a37d8d2a22c6ef7ac32ff0d8579cdbbc01e81e25642789414e7c4643454a\"" Nov 1 00:40:52.402598 systemd[1]: Started cri-containerd-b4d4a37d8d2a22c6ef7ac32ff0d8579cdbbc01e81e25642789414e7c4643454a.scope. 
Nov 1 00:40:52.453008 env[1183]: time="2025-11-01T00:40:52.452884160Z" level=info msg="StartContainer for \"b4d4a37d8d2a22c6ef7ac32ff0d8579cdbbc01e81e25642789414e7c4643454a\" returns successfully" Nov 1 00:40:52.646445 kubelet[1883]: W1101 00:40:52.646284 1883 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe4ae3c0_e147_4dde_be36_a8092f00f10e.slice/cri-containerd-9fb2888e9d7fcb1dc8acba0b0370ceb54c0ffe0f58fa8d914f41a2ebd615394e.scope WatchSource:0}: task 9fb2888e9d7fcb1dc8acba0b0370ceb54c0ffe0f58fa8d914f41a2ebd615394e not found: not found Nov 1 00:40:52.980689 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Nov 1 00:40:53.334258 kubelet[1883]: E1101 00:40:53.334113 1883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:40:54.686092 kubelet[1883]: E1101 00:40:54.686044 1883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 1 00:40:54.811170 systemd[1]: run-containerd-runc-k8s.io-b4d4a37d8d2a22c6ef7ac32ff0d8579cdbbc01e81e25642789414e7c4643454a-runc.HZB25D.mount: Deactivated successfully. 
Nov 1 00:40:55.767162 kubelet[1883]: W1101 00:40:55.766436 1883 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe4ae3c0_e147_4dde_be36_a8092f00f10e.slice/cri-containerd-334334f8f75bb22a0061128ccae8eec6299457015f1e1e8d729034dd66d9361a.scope WatchSource:0}: task 334334f8f75bb22a0061128ccae8eec6299457015f1e1e8d729034dd66d9361a not found: not found
Nov 1 00:40:56.252347 systemd-networkd[997]: lxc_health: Link UP
Nov 1 00:40:56.259728 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Nov 1 00:40:56.259437 systemd-networkd[997]: lxc_health: Gained carrier
Nov 1 00:40:56.687279 kubelet[1883]: E1101 00:40:56.687236 1883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 1 00:40:56.729563 kubelet[1883]: I1101 00:40:56.729448 1883 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9b2w6" podStartSLOduration=8.729421428 podStartE2EDuration="8.729421428s" podCreationTimestamp="2025-11-01 00:40:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:40:53.361035174 +0000 UTC m=+116.780872765" watchObservedRunningTime="2025-11-01 00:40:56.729421428 +0000 UTC m=+120.149259042"
Nov 1 00:40:56.798349 env[1183]: time="2025-11-01T00:40:56.798294275Z" level=info msg="StopPodSandbox for \"5bd4edccf31a80f9714b6236668ff25ac46fb29e5f042794a1dec405a5fcd1e8\""
Nov 1 00:40:56.798929 env[1183]: time="2025-11-01T00:40:56.798408494Z" level=info msg="TearDown network for sandbox \"5bd4edccf31a80f9714b6236668ff25ac46fb29e5f042794a1dec405a5fcd1e8\" successfully"
Nov 1 00:40:56.798929 env[1183]: time="2025-11-01T00:40:56.798443100Z" level=info msg="StopPodSandbox for \"5bd4edccf31a80f9714b6236668ff25ac46fb29e5f042794a1dec405a5fcd1e8\" returns successfully"
Nov 1 00:40:56.798929 env[1183]: time="2025-11-01T00:40:56.798806980Z" level=info msg="RemovePodSandbox for \"5bd4edccf31a80f9714b6236668ff25ac46fb29e5f042794a1dec405a5fcd1e8\""
Nov 1 00:40:56.798929 env[1183]: time="2025-11-01T00:40:56.798833686Z" level=info msg="Forcibly stopping sandbox \"5bd4edccf31a80f9714b6236668ff25ac46fb29e5f042794a1dec405a5fcd1e8\""
Nov 1 00:40:56.799113 env[1183]: time="2025-11-01T00:40:56.798929516Z" level=info msg="TearDown network for sandbox \"5bd4edccf31a80f9714b6236668ff25ac46fb29e5f042794a1dec405a5fcd1e8\" successfully"
Nov 1 00:40:56.807308 env[1183]: time="2025-11-01T00:40:56.807243823Z" level=info msg="RemovePodSandbox \"5bd4edccf31a80f9714b6236668ff25ac46fb29e5f042794a1dec405a5fcd1e8\" returns successfully"
Nov 1 00:40:56.807940 env[1183]: time="2025-11-01T00:40:56.807905946Z" level=info msg="StopPodSandbox for \"dff049c58390571d7cfe132b1b56d52242a9454f974cccf13ad9eefac071fb4c\""
Nov 1 00:40:56.808045 env[1183]: time="2025-11-01T00:40:56.808004109Z" level=info msg="TearDown network for sandbox \"dff049c58390571d7cfe132b1b56d52242a9454f974cccf13ad9eefac071fb4c\" successfully"
Nov 1 00:40:56.808113 env[1183]: time="2025-11-01T00:40:56.808042834Z" level=info msg="StopPodSandbox for \"dff049c58390571d7cfe132b1b56d52242a9454f974cccf13ad9eefac071fb4c\" returns successfully"
Nov 1 00:40:56.809825 env[1183]: time="2025-11-01T00:40:56.808392935Z" level=info msg="RemovePodSandbox for \"dff049c58390571d7cfe132b1b56d52242a9454f974cccf13ad9eefac071fb4c\""
Nov 1 00:40:56.809825 env[1183]: time="2025-11-01T00:40:56.808425472Z" level=info msg="Forcibly stopping sandbox \"dff049c58390571d7cfe132b1b56d52242a9454f974cccf13ad9eefac071fb4c\""
Nov 1 00:40:56.809825 env[1183]: time="2025-11-01T00:40:56.808496691Z" level=info msg="TearDown network for sandbox \"dff049c58390571d7cfe132b1b56d52242a9454f974cccf13ad9eefac071fb4c\" successfully"
Nov 1 00:40:56.811404 env[1183]: time="2025-11-01T00:40:56.811283798Z" level=info msg="RemovePodSandbox \"dff049c58390571d7cfe132b1b56d52242a9454f974cccf13ad9eefac071fb4c\" returns successfully"
Nov 1 00:40:56.811904 env[1183]: time="2025-11-01T00:40:56.811870595Z" level=info msg="StopPodSandbox for \"e5061f9cd5cd0d21bdf7a4e6280c48a32b4dfc1e45154c0db416e45698117c01\""
Nov 1 00:40:56.812022 env[1183]: time="2025-11-01T00:40:56.811972544Z" level=info msg="TearDown network for sandbox \"e5061f9cd5cd0d21bdf7a4e6280c48a32b4dfc1e45154c0db416e45698117c01\" successfully"
Nov 1 00:40:56.812092 env[1183]: time="2025-11-01T00:40:56.812020471Z" level=info msg="StopPodSandbox for \"e5061f9cd5cd0d21bdf7a4e6280c48a32b4dfc1e45154c0db416e45698117c01\" returns successfully"
Nov 1 00:40:56.812354 env[1183]: time="2025-11-01T00:40:56.812324950Z" level=info msg="RemovePodSandbox for \"e5061f9cd5cd0d21bdf7a4e6280c48a32b4dfc1e45154c0db416e45698117c01\""
Nov 1 00:40:56.812418 env[1183]: time="2025-11-01T00:40:56.812350484Z" level=info msg="Forcibly stopping sandbox \"e5061f9cd5cd0d21bdf7a4e6280c48a32b4dfc1e45154c0db416e45698117c01\""
Nov 1 00:40:56.812452 env[1183]: time="2025-11-01T00:40:56.812416026Z" level=info msg="TearDown network for sandbox \"e5061f9cd5cd0d21bdf7a4e6280c48a32b4dfc1e45154c0db416e45698117c01\" successfully"
Nov 1 00:40:56.815447 env[1183]: time="2025-11-01T00:40:56.815384050Z" level=info msg="RemovePodSandbox \"e5061f9cd5cd0d21bdf7a4e6280c48a32b4dfc1e45154c0db416e45698117c01\" returns successfully"
Nov 1 00:40:57.024290 systemd[1]: run-containerd-runc-k8s.io-b4d4a37d8d2a22c6ef7ac32ff0d8579cdbbc01e81e25642789414e7c4643454a-runc.toEysl.mount: Deactivated successfully.
Nov 1 00:40:57.345028 kubelet[1883]: E1101 00:40:57.344867 1883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 1 00:40:57.894869 systemd-networkd[997]: lxc_health: Gained IPv6LL
Nov 1 00:40:58.346516 kubelet[1883]: E1101 00:40:58.346459 1883 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 1 00:40:58.877045 kubelet[1883]: W1101 00:40:58.876977 1883 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe4ae3c0_e147_4dde_be36_a8092f00f10e.slice/cri-containerd-bdc15c1808344cb1d9d46021c17a635b8956fe7a6ce30e442df02483c71fb7ec.scope WatchSource:0}: task bdc15c1808344cb1d9d46021c17a635b8956fe7a6ce30e442df02483c71fb7ec not found: not found
Nov 1 00:40:59.234478 systemd[1]: run-containerd-runc-k8s.io-b4d4a37d8d2a22c6ef7ac32ff0d8579cdbbc01e81e25642789414e7c4643454a-runc.qWln6C.mount: Deactivated successfully.
Nov 1 00:41:01.420347 systemd[1]: run-containerd-runc-k8s.io-b4d4a37d8d2a22c6ef7ac32ff0d8579cdbbc01e81e25642789414e7c4643454a-runc.QjLYz8.mount: Deactivated successfully.
Nov 1 00:41:01.544604 sshd[3680]: pam_unix(sshd:session): session closed for user core
Nov 1 00:41:01.548810 systemd[1]: sshd@28-146.190.139.75:22-139.178.89.65:36050.service: Deactivated successfully.
Nov 1 00:41:01.550185 systemd[1]: session-28.scope: Deactivated successfully.
Nov 1 00:41:01.550535 systemd-logind[1177]: Session 28 logged out. Waiting for processes to exit.
Nov 1 00:41:01.552293 systemd-logind[1177]: Removed session 28.
Nov 1 00:41:01.989007 kubelet[1883]: W1101 00:41:01.988942 1883 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe4ae3c0_e147_4dde_be36_a8092f00f10e.slice/cri-containerd-21efd964add3dcd173248185fc1664678d363a2fd92716ab5075f16c0e66cd8e.scope WatchSource:0}: task 21efd964add3dcd173248185fc1664678d363a2fd92716ab5075f16c0e66cd8e not found: not found
Nov 1 00:41:04.383568 update_engine[1178]: I1101 00:41:04.383494 1178 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Nov 1 00:41:04.383568 update_engine[1178]: I1101 00:41:04.383551 1178 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Nov 1 00:41:04.386934 update_engine[1178]: I1101 00:41:04.386878 1178 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Nov 1 00:41:04.387573 update_engine[1178]: I1101 00:41:04.387504 1178 omaha_request_params.cc:62] Current group set to lts
Nov 1 00:41:04.391077 update_engine[1178]: I1101 00:41:04.391021 1178 update_attempter.cc:499] Already updated boot flags. Skipping.
Nov 1 00:41:04.391077 update_engine[1178]: I1101 00:41:04.391052 1178 update_attempter.cc:643] Scheduling an action processor start.
Nov 1 00:41:04.391077 update_engine[1178]: I1101 00:41:04.391079 1178 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Nov 1 00:41:04.393380 update_engine[1178]: I1101 00:41:04.393093 1178 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Nov 1 00:41:04.393380 update_engine[1178]: I1101 00:41:04.393249 1178 omaha_request_action.cc:270] Posting an Omaha request to disabled
Nov 1 00:41:04.393380 update_engine[1178]: I1101 00:41:04.393257 1178 omaha_request_action.cc:271] Request:
Nov 1 00:41:04.393380 update_engine[1178]:
Nov 1 00:41:04.393380 update_engine[1178]:
Nov 1 00:41:04.393380 update_engine[1178]:
Nov 1 00:41:04.393380 update_engine[1178]:
Nov 1 00:41:04.393380 update_engine[1178]:
Nov 1 00:41:04.393380 update_engine[1178]:
Nov 1 00:41:04.393380 update_engine[1178]:
Nov 1 00:41:04.393380 update_engine[1178]:
Nov 1 00:41:04.393380 update_engine[1178]: I1101 00:41:04.393271 1178 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Nov 1 00:41:04.403840 update_engine[1178]: I1101 00:41:04.403770 1178 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Nov 1 00:41:04.404089 update_engine[1178]: E1101 00:41:04.403944 1178 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Nov 1 00:41:04.404089 update_engine[1178]: I1101 00:41:04.404062 1178 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Nov 1 00:41:04.418330 locksmithd[1220]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0