Nov 1 00:52:54.926127 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Oct 31 23:02:53 -00 2025
Nov 1 00:52:54.926154 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=c4c72a4f851a6da01cbc7150799371516ef8311ea786098908d8eb164df01ee2
Nov 1 00:52:54.926171 kernel: BIOS-provided physical RAM map:
Nov 1 00:52:54.926182 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 1 00:52:54.926194 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 1 00:52:54.926205 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 1 00:52:54.926219 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Nov 1 00:52:54.926230 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Nov 1 00:52:54.926239 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 1 00:52:54.926246 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 1 00:52:54.926253 kernel: NX (Execute Disable) protection: active
Nov 1 00:52:54.926260 kernel: SMBIOS 2.8 present.
Nov 1 00:52:54.926266 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Nov 1 00:52:54.926273 kernel: Hypervisor detected: KVM
Nov 1 00:52:54.926282 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 1 00:52:54.926292 kernel: kvm-clock: cpu 0, msr 291a0001, primary cpu clock
Nov 1 00:52:54.926300 kernel: kvm-clock: using sched offset of 3850305644 cycles
Nov 1 00:52:54.926308 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 1 00:52:54.926320 kernel: tsc: Detected 1995.312 MHz processor
Nov 1 00:52:54.926328 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 1 00:52:54.926336 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 1 00:52:54.926343 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Nov 1 00:52:54.926351 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 1 00:52:54.926361 kernel: ACPI: Early table checksum verification disabled
Nov 1 00:52:54.926368 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Nov 1 00:52:54.926376 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:52:54.926383 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:52:54.926391 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:52:54.926398 kernel: ACPI: FACS 0x000000007FFE0000 000040
Nov 1 00:52:54.926405 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:52:54.926413 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:52:54.926420 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:52:54.926430 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:52:54.926438 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Nov 1 00:52:54.926445 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Nov 1 00:52:54.926452 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Nov 1 00:52:54.926460 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Nov 1 00:52:54.926467 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Nov 1 00:52:54.926474 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Nov 1 00:52:54.926482 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Nov 1 00:52:54.926496 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Nov 1 00:52:54.926503 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Nov 1 00:52:54.926511 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Nov 1 00:52:54.926519 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Nov 1 00:52:54.926527 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Nov 1 00:52:54.926535 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Nov 1 00:52:54.926545 kernel: Zone ranges:
Nov 1 00:52:54.926554 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 1 00:52:54.926561 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Nov 1 00:52:54.926569 kernel: Normal empty
Nov 1 00:52:54.926577 kernel: Movable zone start for each node
Nov 1 00:52:54.926585 kernel: Early memory node ranges
Nov 1 00:52:54.926593 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 1 00:52:54.926601 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Nov 1 00:52:54.926609 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Nov 1 00:52:54.926619 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 1 00:52:54.926631 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 1 00:52:54.926639 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Nov 1 00:52:54.926647 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 1 00:52:54.926655 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 1 00:52:54.926663 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 1 00:52:54.926671 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 1 00:52:54.926679 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 1 00:52:54.926687 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 1 00:52:54.926697 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 1 00:52:54.926709 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 1 00:52:54.926717 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 1 00:52:54.926725 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 1 00:52:54.926732 kernel: TSC deadline timer available
Nov 1 00:52:54.926740 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Nov 1 00:52:54.926762 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Nov 1 00:52:54.926773 kernel: Booting paravirtualized kernel on KVM
Nov 1 00:52:54.930298 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 1 00:52:54.930321 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Nov 1 00:52:54.930331 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Nov 1 00:52:54.930339 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Nov 1 00:52:54.930347 kernel: pcpu-alloc: [0] 0 1
Nov 1 00:52:54.930355 kernel: kvm-guest: stealtime: cpu 0, msr 7dc1c0c0
Nov 1 00:52:54.930363 kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 1 00:52:54.930371 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Nov 1 00:52:54.930379 kernel: Policy zone: DMA32
Nov 1 00:52:54.930389 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=c4c72a4f851a6da01cbc7150799371516ef8311ea786098908d8eb164df01ee2
Nov 1 00:52:54.930400 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Nov 1 00:52:54.930408 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 1 00:52:54.930416 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 1 00:52:54.930424 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 1 00:52:54.930433 kernel: Memory: 1973276K/2096612K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47496K init, 4084K bss, 123076K reserved, 0K cma-reserved)
Nov 1 00:52:54.930441 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 1 00:52:54.930449 kernel: Kernel/User page tables isolation: enabled
Nov 1 00:52:54.930457 kernel: ftrace: allocating 34614 entries in 136 pages
Nov 1 00:52:54.930467 kernel: ftrace: allocated 136 pages with 2 groups
Nov 1 00:52:54.930475 kernel: rcu: Hierarchical RCU implementation.
Nov 1 00:52:54.930485 kernel: rcu: RCU event tracing is enabled.
Nov 1 00:52:54.930493 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 1 00:52:54.930501 kernel: Rude variant of Tasks RCU enabled.
Nov 1 00:52:54.930509 kernel: Tracing variant of Tasks RCU enabled.
Nov 1 00:52:54.930517 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 1 00:52:54.930525 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 1 00:52:54.930533 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Nov 1 00:52:54.930543 kernel: random: crng init done
Nov 1 00:52:54.930551 kernel: Console: colour VGA+ 80x25
Nov 1 00:52:54.930559 kernel: printk: console [tty0] enabled
Nov 1 00:52:54.930567 kernel: printk: console [ttyS0] enabled
Nov 1 00:52:54.930575 kernel: ACPI: Core revision 20210730
Nov 1 00:52:54.930584 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 1 00:52:54.930592 kernel: APIC: Switch to symmetric I/O mode setup
Nov 1 00:52:54.930600 kernel: x2apic enabled
Nov 1 00:52:54.930608 kernel: Switched APIC routing to physical x2apic.
Nov 1 00:52:54.930616 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 1 00:52:54.930627 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3985c314e25, max_idle_ns: 881590612270 ns
Nov 1 00:52:54.930635 kernel: Calibrating delay loop (skipped) preset value.. 3990.62 BogoMIPS (lpj=1995312)
Nov 1 00:52:54.930652 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Nov 1 00:52:54.930660 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Nov 1 00:52:54.930668 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 1 00:52:54.930676 kernel: Spectre V2 : Mitigation: Retpolines
Nov 1 00:52:54.930685 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 1 00:52:54.930693 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Nov 1 00:52:54.930703 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 1 00:52:54.930718 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Nov 1 00:52:54.930727 kernel: MDS: Mitigation: Clear CPU buffers
Nov 1 00:52:54.930737 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 1 00:52:54.930746 kernel: active return thunk: its_return_thunk
Nov 1 00:52:54.930766 kernel: ITS: Mitigation: Aligned branch/return thunks
Nov 1 00:52:54.930775 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 1 00:52:54.930783 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 1 00:52:54.930791 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 1 00:52:54.930800 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 1 00:52:54.930811 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Nov 1 00:52:54.930819 kernel: Freeing SMP alternatives memory: 32K
Nov 1 00:52:54.930827 kernel: pid_max: default: 32768 minimum: 301
Nov 1 00:52:54.930835 kernel: LSM: Security Framework initializing
Nov 1 00:52:54.930844 kernel: SELinux: Initializing.
Nov 1 00:52:54.930852 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 1 00:52:54.930861 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 1 00:52:54.930871 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Nov 1 00:52:54.930880 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Nov 1 00:52:54.930888 kernel: signal: max sigframe size: 1776
Nov 1 00:52:54.930896 kernel: rcu: Hierarchical SRCU implementation.
Nov 1 00:52:54.930905 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Nov 1 00:52:54.930913 kernel: smp: Bringing up secondary CPUs ...
Nov 1 00:52:54.930921 kernel: x86: Booting SMP configuration:
Nov 1 00:52:54.930930 kernel: .... node #0, CPUs: #1
Nov 1 00:52:54.930938 kernel: kvm-clock: cpu 1, msr 291a0041, secondary cpu clock
Nov 1 00:52:54.930948 kernel: kvm-guest: stealtime: cpu 1, msr 7dd1c0c0
Nov 1 00:52:54.930956 kernel: smp: Brought up 1 node, 2 CPUs
Nov 1 00:52:54.930965 kernel: smpboot: Max logical packages: 1
Nov 1 00:52:54.930973 kernel: smpboot: Total of 2 processors activated (7981.24 BogoMIPS)
Nov 1 00:52:54.930981 kernel: devtmpfs: initialized
Nov 1 00:52:54.930990 kernel: x86/mm: Memory block size: 128MB
Nov 1 00:52:54.930998 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 1 00:52:54.931007 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 1 00:52:54.931015 kernel: pinctrl core: initialized pinctrl subsystem
Nov 1 00:52:54.931026 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 1 00:52:54.931034 kernel: audit: initializing netlink subsys (disabled)
Nov 1 00:52:54.931042 kernel: audit: type=2000 audit(1761958373.226:1): state=initialized audit_enabled=0 res=1
Nov 1 00:52:54.931051 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 1 00:52:54.931059 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 1 00:52:54.931067 kernel: cpuidle: using governor menu
Nov 1 00:52:54.931079 kernel: ACPI: bus type PCI registered
Nov 1 00:52:54.931088 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 1 00:52:54.931096 kernel: dca service started, version 1.12.1
Nov 1 00:52:54.931106 kernel: PCI: Using configuration type 1 for base access
Nov 1 00:52:54.931115 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 1 00:52:54.931123 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Nov 1 00:52:54.931131 kernel: ACPI: Added _OSI(Module Device)
Nov 1 00:52:54.931140 kernel: ACPI: Added _OSI(Processor Device)
Nov 1 00:52:54.931148 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 1 00:52:54.931156 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Nov 1 00:52:54.931165 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Nov 1 00:52:54.931173 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Nov 1 00:52:54.931184 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 1 00:52:54.931192 kernel: ACPI: Interpreter enabled
Nov 1 00:52:54.931200 kernel: ACPI: PM: (supports S0 S5)
Nov 1 00:52:54.931208 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 1 00:52:54.931217 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 1 00:52:54.931226 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 1 00:52:54.931234 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 1 00:52:54.931449 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Nov 1 00:52:54.931551 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Nov 1 00:52:54.931564 kernel: acpiphp: Slot [3] registered
Nov 1 00:52:54.931572 kernel: acpiphp: Slot [4] registered
Nov 1 00:52:54.931580 kernel: acpiphp: Slot [5] registered
Nov 1 00:52:54.931589 kernel: acpiphp: Slot [6] registered
Nov 1 00:52:54.931597 kernel: acpiphp: Slot [7] registered
Nov 1 00:52:54.931605 kernel: acpiphp: Slot [8] registered
Nov 1 00:52:54.931613 kernel: acpiphp: Slot [9] registered
Nov 1 00:52:54.931622 kernel: acpiphp: Slot [10] registered
Nov 1 00:52:54.931632 kernel: acpiphp: Slot [11] registered
Nov 1 00:52:54.931640 kernel: acpiphp: Slot [12] registered
Nov 1 00:52:54.931649 kernel: acpiphp: Slot [13] registered
Nov 1 00:52:54.931657 kernel: acpiphp: Slot [14] registered
Nov 1 00:52:54.931665 kernel: acpiphp: Slot [15] registered
Nov 1 00:52:54.931674 kernel: acpiphp: Slot [16] registered
Nov 1 00:52:54.931682 kernel: acpiphp: Slot [17] registered
Nov 1 00:52:54.931690 kernel: acpiphp: Slot [18] registered
Nov 1 00:52:54.931699 kernel: acpiphp: Slot [19] registered
Nov 1 00:52:54.931709 kernel: acpiphp: Slot [20] registered
Nov 1 00:52:54.931717 kernel: acpiphp: Slot [21] registered
Nov 1 00:52:54.931726 kernel: acpiphp: Slot [22] registered
Nov 1 00:52:54.931734 kernel: acpiphp: Slot [23] registered
Nov 1 00:52:54.931742 kernel: acpiphp: Slot [24] registered
Nov 1 00:52:54.931761 kernel: acpiphp: Slot [25] registered
Nov 1 00:52:54.931769 kernel: acpiphp: Slot [26] registered
Nov 1 00:52:54.931777 kernel: acpiphp: Slot [27] registered
Nov 1 00:52:54.931785 kernel: acpiphp: Slot [28] registered
Nov 1 00:52:54.931793 kernel: acpiphp: Slot [29] registered
Nov 1 00:52:54.931804 kernel: acpiphp: Slot [30] registered
Nov 1 00:52:54.931812 kernel: acpiphp: Slot [31] registered
Nov 1 00:52:54.931820 kernel: PCI host bridge to bus 0000:00
Nov 1 00:52:54.931934 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 1 00:52:54.932020 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 1 00:52:54.932102 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 1 00:52:54.932183 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Nov 1 00:52:54.932267 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Nov 1 00:52:54.932348 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 1 00:52:54.932459 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Nov 1 00:52:54.932598 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Nov 1 00:52:54.932714 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Nov 1 00:52:54.936934 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Nov 1 00:52:54.937050 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Nov 1 00:52:54.937142 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Nov 1 00:52:54.937233 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Nov 1 00:52:54.937323 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Nov 1 00:52:54.937429 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Nov 1 00:52:54.937543 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Nov 1 00:52:54.937647 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Nov 1 00:52:54.937742 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Nov 1 00:52:54.937845 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Nov 1 00:52:54.937989 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Nov 1 00:52:54.938085 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Nov 1 00:52:54.938195 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Nov 1 00:52:54.938286 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Nov 1 00:52:54.938405 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Nov 1 00:52:54.938501 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 1 00:52:54.938634 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Nov 1 00:52:54.938767 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Nov 1 00:52:54.938863 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Nov 1 00:52:54.938953 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Nov 1 00:52:54.939053 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Nov 1 00:52:54.939148 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Nov 1 00:52:54.939261 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Nov 1 00:52:54.939352 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Nov 1 00:52:54.939460 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Nov 1 00:52:54.939552 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Nov 1 00:52:54.939641 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Nov 1 00:52:54.939730 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Nov 1 00:52:54.939838 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Nov 1 00:52:54.939932 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Nov 1 00:52:54.940022 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Nov 1 00:52:54.940111 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Nov 1 00:52:54.940216 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Nov 1 00:52:54.940307 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Nov 1 00:52:54.940397 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Nov 1 00:52:54.940505 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Nov 1 00:52:54.940612 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Nov 1 00:52:54.940705 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Nov 1 00:52:54.940805 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Nov 1 00:52:54.940816 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 1 00:52:54.940825 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 1 00:52:54.940834 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 1 00:52:54.940845 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 1 00:52:54.940854 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 1 00:52:54.940862 kernel: iommu: Default domain type: Translated
Nov 1 00:52:54.940871 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 1 00:52:54.940961 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 1 00:52:54.941052 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 1 00:52:54.941142 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Nov 1 00:52:54.941153 kernel: vgaarb: loaded
Nov 1 00:52:54.941162 kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 1 00:52:54.941173 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Nov 1 00:52:54.941182 kernel: PTP clock support registered
Nov 1 00:52:54.941190 kernel: PCI: Using ACPI for IRQ routing
Nov 1 00:52:54.941199 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 1 00:52:54.941207 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 1 00:52:54.941216 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Nov 1 00:52:54.941224 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 1 00:52:54.941232 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 1 00:52:54.941241 kernel: clocksource: Switched to clocksource kvm-clock
Nov 1 00:52:54.941251 kernel: VFS: Disk quotas dquot_6.6.0
Nov 1 00:52:54.941260 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 1 00:52:54.941269 kernel: pnp: PnP ACPI init
Nov 1 00:52:54.941277 kernel: pnp: PnP ACPI: found 4 devices
Nov 1 00:52:54.941286 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 1 00:52:54.941295 kernel: NET: Registered PF_INET protocol family
Nov 1 00:52:54.941303 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 1 00:52:54.941312 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Nov 1 00:52:54.941322 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 1 00:52:54.941331 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 1 00:52:54.941340 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Nov 1 00:52:54.941348 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Nov 1 00:52:54.941357 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 1 00:52:54.941365 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 1 00:52:54.941373 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 1 00:52:54.941382 kernel: NET: Registered PF_XDP protocol family
Nov 1 00:52:54.941482 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 1 00:52:54.941575 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 1 00:52:54.941657 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 1 00:52:54.941738 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Nov 1 00:52:54.950916 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Nov 1 00:52:54.951037 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 1 00:52:54.951137 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 1 00:52:54.951231 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Nov 1 00:52:54.951243 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 1 00:52:54.951342 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x740 took 35588 usecs
Nov 1 00:52:54.951354 kernel: PCI: CLS 0 bytes, default 64
Nov 1 00:52:54.951363 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Nov 1 00:52:54.951372 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x3985c314e25, max_idle_ns: 881590612270 ns
Nov 1 00:52:54.951380 kernel: Initialise system trusted keyrings
Nov 1 00:52:54.951389 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Nov 1 00:52:54.951398 kernel: Key type asymmetric registered
Nov 1 00:52:54.951407 kernel: Asymmetric key parser 'x509' registered
Nov 1 00:52:54.951415 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Nov 1 00:52:54.951427 kernel: io scheduler mq-deadline registered
Nov 1 00:52:54.951435 kernel: io scheduler kyber registered
Nov 1 00:52:54.951443 kernel: io scheduler bfq registered
Nov 1 00:52:54.951452 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 1 00:52:54.951461 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Nov 1 00:52:54.951469 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 1 00:52:54.951478 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 1 00:52:54.951486 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 1 00:52:54.951495 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 1 00:52:54.951505 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 1 00:52:54.951514 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 1 00:52:54.951523 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 1 00:52:54.951532 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 1 00:52:54.951674 kernel: rtc_cmos 00:03: RTC can wake from S4
Nov 1 00:52:54.951796 kernel: rtc_cmos 00:03: registered as rtc0
Nov 1 00:52:54.951892 kernel: rtc_cmos 00:03: setting system clock to 2025-11-01T00:52:54 UTC (1761958374)
Nov 1 00:52:54.951989 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Nov 1 00:52:54.952002 kernel: intel_pstate: CPU model not supported
Nov 1 00:52:54.952012 kernel: NET: Registered PF_INET6 protocol family
Nov 1 00:52:54.952023 kernel: Segment Routing with IPv6
Nov 1 00:52:54.952032 kernel: In-situ OAM (IOAM) with IPv6
Nov 1 00:52:54.952043 kernel: NET: Registered PF_PACKET protocol family
Nov 1 00:52:54.952052 kernel: Key type dns_resolver registered
Nov 1 00:52:54.952061 kernel: IPI shorthand broadcast: enabled
Nov 1 00:52:54.952072 kernel: sched_clock: Marking stable (763537758, 222757583)->(1198420274, -212124933)
Nov 1 00:52:54.952083 kernel: registered taskstats version 1
Nov 1 00:52:54.952095 kernel: Loading compiled-in X.509 certificates
Nov 1 00:52:54.952104 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: f2055682e6899ad8548fd369019e7b47939b46a0'
Nov 1 00:52:54.952112 kernel: Key type .fscrypt registered
Nov 1 00:52:54.952121 kernel: Key type fscrypt-provisioning registered
Nov 1 00:52:54.952130 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 1 00:52:54.952138 kernel: ima: Allocated hash algorithm: sha1
Nov 1 00:52:54.952147 kernel: ima: No architecture policies found
Nov 1 00:52:54.952155 kernel: clk: Disabling unused clocks
Nov 1 00:52:54.952166 kernel: Freeing unused kernel image (initmem) memory: 47496K
Nov 1 00:52:54.952175 kernel: Write protecting the kernel read-only data: 28672k
Nov 1 00:52:54.952183 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Nov 1 00:52:54.952192 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K
Nov 1 00:52:54.952200 kernel: Run /init as init process
Nov 1 00:52:54.952209 kernel: with arguments:
Nov 1 00:52:54.952233 kernel: /init
Nov 1 00:52:54.952244 kernel: with environment:
Nov 1 00:52:54.952253 kernel: HOME=/
Nov 1 00:52:54.952263 kernel: TERM=linux
Nov 1 00:52:54.952273 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Nov 1 00:52:54.952285 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 1 00:52:54.952297 systemd[1]: Detected virtualization kvm.
Nov 1 00:52:54.952307 systemd[1]: Detected architecture x86-64.
Nov 1 00:52:54.952317 systemd[1]: Running in initrd.
Nov 1 00:52:54.952326 systemd[1]: No hostname configured, using default hostname.
Nov 1 00:52:54.952335 systemd[1]: Hostname set to .
Nov 1 00:52:54.952347 systemd[1]: Initializing machine ID from VM UUID.
Nov 1 00:52:54.952357 systemd[1]: Queued start job for default target initrd.target.
Nov 1 00:52:54.952366 systemd[1]: Started systemd-ask-password-console.path.
Nov 1 00:52:54.952375 systemd[1]: Reached target cryptsetup.target.
Nov 1 00:52:54.952384 systemd[1]: Reached target paths.target.
Nov 1 00:52:54.952394 systemd[1]: Reached target slices.target.
Nov 1 00:52:54.952403 systemd[1]: Reached target swap.target.
Nov 1 00:52:54.952412 systemd[1]: Reached target timers.target.
Nov 1 00:52:54.952424 systemd[1]: Listening on iscsid.socket.
Nov 1 00:52:54.952434 systemd[1]: Listening on iscsiuio.socket.
Nov 1 00:52:54.952445 systemd[1]: Listening on systemd-journald-audit.socket.
Nov 1 00:52:54.952455 systemd[1]: Listening on systemd-journald-dev-log.socket.
Nov 1 00:52:54.952464 systemd[1]: Listening on systemd-journald.socket.
Nov 1 00:52:54.952473 systemd[1]: Listening on systemd-networkd.socket.
Nov 1 00:52:54.952482 systemd[1]: Listening on systemd-udevd-control.socket.
Nov 1 00:52:54.952492 systemd[1]: Listening on systemd-udevd-kernel.socket.
Nov 1 00:52:54.952503 systemd[1]: Reached target sockets.target.
Nov 1 00:52:54.952513 systemd[1]: Starting kmod-static-nodes.service...
Nov 1 00:52:54.952525 systemd[1]: Finished network-cleanup.service.
Nov 1 00:52:54.952534 systemd[1]: Starting systemd-fsck-usr.service...
Nov 1 00:52:54.952544 systemd[1]: Starting systemd-journald.service...
Nov 1 00:52:54.952555 systemd[1]: Starting systemd-modules-load.service...
Nov 1 00:52:54.952565 systemd[1]: Starting systemd-resolved.service...
Nov 1 00:52:54.952574 systemd[1]: Starting systemd-vconsole-setup.service...
Nov 1 00:52:54.952584 systemd[1]: Finished kmod-static-nodes.service.
Nov 1 00:52:54.952599 systemd-journald[184]: Journal started
Nov 1 00:52:54.952657 systemd-journald[184]: Runtime Journal (/run/log/journal/36a8f3cfd0e142c987a0fc58a95e299a) is 4.9M, max 39.5M, 34.5M free.
Nov 1 00:52:54.927179 systemd-modules-load[185]: Inserted module 'overlay'
Nov 1 00:52:55.049630 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 1 00:52:55.049658 kernel: Bridge firewalling registered
Nov 1 00:52:55.049671 kernel: SCSI subsystem initialized
Nov 1 00:52:55.049682 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 1 00:52:55.049703 kernel: device-mapper: uevent: version 1.0.3
Nov 1 00:52:55.049714 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Nov 1 00:52:55.049726 systemd[1]: Started systemd-journald.service.
Nov 1 00:52:55.049741 kernel: audit: type=1130 audit(1761958375.040:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:52:55.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:52:54.967288 systemd-resolved[186]: Positive Trust Anchors:
Nov 1 00:52:55.056209 kernel: audit: type=1130 audit(1761958375.049:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:52:55.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:52:54.967300 systemd-resolved[186]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 1 00:52:55.063228 kernel: audit: type=1130 audit(1761958375.056:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:52:55.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:52:54.967331 systemd-resolved[186]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Nov 1 00:52:55.071831 kernel: audit: type=1130 audit(1761958375.063:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:52:55.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:52:54.970000 systemd-resolved[186]: Defaulting to hostname 'linux'.
Nov 1 00:52:55.078547 kernel: audit: type=1130 audit(1761958375.071:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:52:55.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:52:54.979281 systemd-modules-load[185]: Inserted module 'br_netfilter'
Nov 1 00:52:55.085374 kernel: audit: type=1130 audit(1761958375.078:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Nov 1 00:52:55.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:55.016069 systemd-modules-load[185]: Inserted module 'dm_multipath' Nov 1 00:52:55.050418 systemd[1]: Started systemd-resolved.service. Nov 1 00:52:55.057126 systemd[1]: Finished systemd-fsck-usr.service. Nov 1 00:52:55.064125 systemd[1]: Finished systemd-modules-load.service. Nov 1 00:52:55.072631 systemd[1]: Finished systemd-vconsole-setup.service. Nov 1 00:52:55.079367 systemd[1]: Reached target nss-lookup.target. Nov 1 00:52:55.086870 systemd[1]: Starting dracut-cmdline-ask.service... Nov 1 00:52:55.089121 systemd[1]: Starting systemd-sysctl.service... Nov 1 00:52:55.094424 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Nov 1 00:52:55.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:55.106282 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Nov 1 00:52:55.120184 kernel: audit: type=1130 audit(1761958375.106:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:55.107159 systemd[1]: Finished systemd-sysctl.service. Nov 1 00:52:55.131173 kernel: audit: type=1130 audit(1761958375.118:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:55.131200 kernel: audit: type=1130 audit(1761958375.126:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:52:55.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:55.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:55.126610 systemd[1]: Finished dracut-cmdline-ask.service. Nov 1 00:52:55.128199 systemd[1]: Starting dracut-cmdline.service... Nov 1 00:52:55.139103 dracut-cmdline[206]: dracut-dracut-053 Nov 1 00:52:55.142293 dracut-cmdline[206]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=c4c72a4f851a6da01cbc7150799371516ef8311ea786098908d8eb164df01ee2 Nov 1 00:52:55.212789 kernel: Loading iSCSI transport class v2.0-870. Nov 1 00:52:55.233781 kernel: iscsi: registered transport (tcp) Nov 1 00:52:55.263208 kernel: iscsi: registered transport (qla4xxx) Nov 1 00:52:55.263288 kernel: QLogic iSCSI HBA Driver Nov 1 00:52:55.300573 systemd[1]: Finished dracut-cmdline.service. Nov 1 00:52:55.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:55.302481 systemd[1]: Starting dracut-pre-udev.service... 
Nov 1 00:52:55.356830 kernel: raid6: avx2x4 gen() 30618 MB/s Nov 1 00:52:55.374821 kernel: raid6: avx2x4 xor() 9468 MB/s Nov 1 00:52:55.392805 kernel: raid6: avx2x2 gen() 30837 MB/s Nov 1 00:52:55.410800 kernel: raid6: avx2x2 xor() 15729 MB/s Nov 1 00:52:55.428795 kernel: raid6: avx2x1 gen() 25186 MB/s Nov 1 00:52:55.446798 kernel: raid6: avx2x1 xor() 14500 MB/s Nov 1 00:52:55.464803 kernel: raid6: sse2x4 gen() 12142 MB/s Nov 1 00:52:55.482812 kernel: raid6: sse2x4 xor() 6092 MB/s Nov 1 00:52:55.500792 kernel: raid6: sse2x2 gen() 10995 MB/s Nov 1 00:52:55.518799 kernel: raid6: sse2x2 xor() 7060 MB/s Nov 1 00:52:55.536794 kernel: raid6: sse2x1 gen() 8818 MB/s Nov 1 00:52:55.555789 kernel: raid6: sse2x1 xor() 5008 MB/s Nov 1 00:52:55.555835 kernel: raid6: using algorithm avx2x2 gen() 30837 MB/s Nov 1 00:52:55.555851 kernel: raid6: .... xor() 15729 MB/s, rmw enabled Nov 1 00:52:55.557295 kernel: raid6: using avx2x2 recovery algorithm Nov 1 00:52:55.574785 kernel: xor: automatically using best checksumming function avx Nov 1 00:52:55.692798 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Nov 1 00:52:55.703364 systemd[1]: Finished dracut-pre-udev.service. Nov 1 00:52:55.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:55.703000 audit: BPF prog-id=7 op=LOAD Nov 1 00:52:55.704000 audit: BPF prog-id=8 op=LOAD Nov 1 00:52:55.705292 systemd[1]: Starting systemd-udevd.service... Nov 1 00:52:55.720189 systemd-udevd[384]: Using default interface naming scheme 'v252'. Nov 1 00:52:55.724878 systemd[1]: Started systemd-udevd.service. Nov 1 00:52:55.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:52:55.730428 systemd[1]: Starting dracut-pre-trigger.service... Nov 1 00:52:55.744991 dracut-pre-trigger[400]: rd.md=0: removing MD RAID activation Nov 1 00:52:55.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:55.778734 systemd[1]: Finished dracut-pre-trigger.service. Nov 1 00:52:55.780231 systemd[1]: Starting systemd-udev-trigger.service... Nov 1 00:52:55.826851 systemd[1]: Finished systemd-udev-trigger.service. Nov 1 00:52:55.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:55.888776 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Nov 1 00:52:55.978206 kernel: scsi host0: Virtio SCSI HBA Nov 1 00:52:55.978349 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 1 00:52:55.978362 kernel: GPT:9289727 != 125829119 Nov 1 00:52:55.978373 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 1 00:52:55.978384 kernel: GPT:9289727 != 125829119 Nov 1 00:52:55.978395 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 1 00:52:55.978406 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 1 00:52:55.978416 kernel: cryptd: max_cpu_qlen set to 1000 Nov 1 00:52:55.978430 kernel: libata version 3.00 loaded. Nov 1 00:52:55.978441 kernel: ACPI: bus type USB registered Nov 1 00:52:55.978452 kernel: ata_piix 0000:00:01.1: version 2.13 Nov 1 00:52:55.978561 kernel: AVX2 version of gcm_enc/dec engaged. 
Nov 1 00:52:55.978573 kernel: AES CTR mode by8 optimization enabled Nov 1 00:52:55.978584 kernel: scsi host1: ata_piix Nov 1 00:52:55.978696 kernel: scsi host2: ata_piix Nov 1 00:52:55.978867 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Nov 1 00:52:55.978886 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Nov 1 00:52:55.978901 kernel: usbcore: registered new interface driver usbfs Nov 1 00:52:55.978912 kernel: usbcore: registered new interface driver hub Nov 1 00:52:55.978923 kernel: usbcore: registered new device driver usb Nov 1 00:52:55.979790 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB) Nov 1 00:52:56.142786 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (440) Nov 1 00:52:56.149935 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Nov 1 00:52:56.156260 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Nov 1 00:52:56.160819 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Nov 1 00:52:56.162406 systemd[1]: Starting disk-uuid.service... Nov 1 00:52:56.168690 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Nov 1 00:52:56.171435 kernel: ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver Nov 1 00:52:56.171458 disk-uuid[511]: Primary Header is updated. Nov 1 00:52:56.171458 disk-uuid[511]: Secondary Entries is updated. Nov 1 00:52:56.171458 disk-uuid[511]: Secondary Header is updated. Nov 1 00:52:56.177432 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Nov 1 00:52:56.182011 kernel: ehci-pci: EHCI PCI platform driver Nov 1 00:52:56.182034 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 1 00:52:56.188773 kernel: uhci_hcd: USB Universal Host Controller Interface driver Nov 1 00:52:56.188823 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 1 00:52:56.212410 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Nov 1 00:52:56.219875 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Nov 1 00:52:56.220053 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Nov 1 00:52:56.220166 kernel: uhci_hcd 0000:00:01.2: irq 11, io base 0x0000c180 Nov 1 00:52:56.220273 kernel: hub 1-0:1.0: USB hub found Nov 1 00:52:56.220428 kernel: hub 1-0:1.0: 2 ports detected Nov 1 00:52:57.189219 disk-uuid[515]: The operation has completed successfully. Nov 1 00:52:57.190363 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 1 00:52:57.224325 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 1 00:52:57.224424 systemd[1]: Finished disk-uuid.service. Nov 1 00:52:57.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:57.224000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:57.226003 systemd[1]: Starting verity-setup.service... Nov 1 00:52:57.244775 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Nov 1 00:52:57.282491 systemd[1]: Found device dev-mapper-usr.device. Nov 1 00:52:57.285203 systemd[1]: Mounting sysusr-usr.mount... Nov 1 00:52:57.286988 systemd[1]: Finished verity-setup.service. Nov 1 00:52:57.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:52:57.374786 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Nov 1 00:52:57.375372 systemd[1]: Mounted sysusr-usr.mount. Nov 1 00:52:57.376246 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Nov 1 00:52:57.376976 systemd[1]: Starting ignition-setup.service... Nov 1 00:52:57.381477 systemd[1]: Starting parse-ip-for-networkd.service... Nov 1 00:52:57.395178 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:52:57.395220 kernel: BTRFS info (device vda6): using free space tree Nov 1 00:52:57.395233 kernel: BTRFS info (device vda6): has skinny extents Nov 1 00:52:57.410634 systemd[1]: mnt-oem.mount: Deactivated successfully. Nov 1 00:52:57.417106 systemd[1]: Finished ignition-setup.service. Nov 1 00:52:57.417000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:57.418613 systemd[1]: Starting ignition-fetch-offline.service... Nov 1 00:52:57.525302 systemd[1]: Finished parse-ip-for-networkd.service. Nov 1 00:52:57.526000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:57.527000 audit: BPF prog-id=9 op=LOAD Nov 1 00:52:57.529279 systemd[1]: Starting systemd-networkd.service... 
Nov 1 00:52:57.546672 ignition[608]: Ignition 2.14.0 Nov 1 00:52:57.547627 ignition[608]: Stage: fetch-offline Nov 1 00:52:57.547692 ignition[608]: reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 00:52:57.547719 ignition[608]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Nov 1 00:52:57.552001 ignition[608]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 1 00:52:57.553065 ignition[608]: parsed url from cmdline: "" Nov 1 00:52:57.553131 ignition[608]: no config URL provided Nov 1 00:52:57.554300 ignition[608]: reading system config file "/usr/lib/ignition/user.ign" Nov 1 00:52:57.554314 ignition[608]: no config at "/usr/lib/ignition/user.ign" Nov 1 00:52:57.554320 ignition[608]: failed to fetch config: resource requires networking Nov 1 00:52:57.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:57.557838 systemd[1]: Finished ignition-fetch-offline.service. Nov 1 00:52:57.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:57.554894 ignition[608]: Ignition finished successfully Nov 1 00:52:57.558799 systemd-networkd[689]: lo: Link UP Nov 1 00:52:57.558803 systemd-networkd[689]: lo: Gained carrier Nov 1 00:52:57.559359 systemd-networkd[689]: Enumeration completed Nov 1 00:52:57.559724 systemd-networkd[689]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 00:52:57.559742 systemd[1]: Started systemd-networkd.service. Nov 1 00:52:57.560697 systemd-networkd[689]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. 
Nov 1 00:52:57.561044 systemd[1]: Reached target network.target. Nov 1 00:52:57.561822 systemd-networkd[689]: eth1: Link UP Nov 1 00:52:57.561826 systemd-networkd[689]: eth1: Gained carrier Nov 1 00:52:57.562611 systemd[1]: Starting ignition-fetch.service... Nov 1 00:52:57.564601 systemd[1]: Starting iscsiuio.service... Nov 1 00:52:57.583100 systemd-networkd[689]: eth0: Link UP Nov 1 00:52:57.583110 systemd-networkd[689]: eth0: Gained carrier Nov 1 00:52:57.593701 ignition[691]: Ignition 2.14.0 Nov 1 00:52:57.593713 ignition[691]: Stage: fetch Nov 1 00:52:57.593850 ignition[691]: reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 00:52:57.593869 ignition[691]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Nov 1 00:52:57.595581 ignition[691]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 1 00:52:57.595677 ignition[691]: parsed url from cmdline: "" Nov 1 00:52:57.595681 ignition[691]: no config URL provided Nov 1 00:52:57.595686 ignition[691]: reading system config file "/usr/lib/ignition/user.ign" Nov 1 00:52:57.600107 systemd-networkd[689]: eth1: DHCPv4 address 10.124.0.34/20 acquired from 169.254.169.253 Nov 1 00:52:57.595694 ignition[691]: no config at "/usr/lib/ignition/user.ign" Nov 1 00:52:57.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:57.601839 systemd[1]: Started iscsiuio.service. Nov 1 00:52:57.595722 ignition[691]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Nov 1 00:52:57.604549 systemd[1]: Starting iscsid.service... 
Nov 1 00:52:57.604679 ignition[691]: GET error: Get "http://169.254.169.254/metadata/v1/user-data": dial tcp 169.254.169.254:80: connect: network is unreachable Nov 1 00:52:57.604929 systemd-networkd[689]: eth0: DHCPv4 address 144.126.212.254/20, gateway 144.126.208.1 acquired from 169.254.169.253 Nov 1 00:52:57.611444 iscsid[699]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Nov 1 00:52:57.611444 iscsid[699]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Nov 1 00:52:57.611444 iscsid[699]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Nov 1 00:52:57.611444 iscsid[699]: If using hardware iscsi like qla4xxx this message can be ignored. Nov 1 00:52:57.611444 iscsid[699]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Nov 1 00:52:57.611444 iscsid[699]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Nov 1 00:52:57.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:57.614292 systemd[1]: Started iscsid.service. Nov 1 00:52:57.617208 systemd[1]: Starting dracut-initqueue.service... Nov 1 00:52:57.630696 systemd[1]: Finished dracut-initqueue.service. Nov 1 00:52:57.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:57.631458 systemd[1]: Reached target remote-fs-pre.target.
Nov 1 00:52:57.632650 systemd[1]: Reached target remote-cryptsetup.target. Nov 1 00:52:57.633971 systemd[1]: Reached target remote-fs.target. Nov 1 00:52:57.636089 systemd[1]: Starting dracut-pre-mount.service... Nov 1 00:52:57.645299 systemd[1]: Finished dracut-pre-mount.service. Nov 1 00:52:57.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:57.804926 ignition[691]: GET http://169.254.169.254/metadata/v1/user-data: attempt #2 Nov 1 00:52:57.829200 ignition[691]: GET result: OK Nov 1 00:52:57.829494 ignition[691]: parsing config with SHA512: a1be319ab1d61735f75beb12db2b979b4a2d684b95acb79e7c858a033662faebe215e409456725b9af09dcc2df4dd882699b271949746ba467c5c713c2740a49 Nov 1 00:52:57.840117 unknown[691]: fetched base config from "system" Nov 1 00:52:57.840131 unknown[691]: fetched base config from "system" Nov 1 00:52:57.840619 ignition[691]: fetch: fetch complete Nov 1 00:52:57.840137 unknown[691]: fetched user config from "digitalocean" Nov 1 00:52:57.840625 ignition[691]: fetch: fetch passed Nov 1 00:52:57.843102 systemd[1]: Finished ignition-fetch.service. Nov 1 00:52:57.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:57.840667 ignition[691]: Ignition finished successfully Nov 1 00:52:57.844965 systemd[1]: Starting ignition-kargs.service... 
Nov 1 00:52:57.854677 ignition[713]: Ignition 2.14.0 Nov 1 00:52:57.854689 ignition[713]: Stage: kargs Nov 1 00:52:57.854832 ignition[713]: reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 00:52:57.854858 ignition[713]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Nov 1 00:52:57.856608 ignition[713]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 1 00:52:57.857897 ignition[713]: kargs: kargs passed Nov 1 00:52:57.858738 systemd[1]: Finished ignition-kargs.service. Nov 1 00:52:57.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:57.857939 ignition[713]: Ignition finished successfully Nov 1 00:52:57.860655 systemd[1]: Starting ignition-disks.service... Nov 1 00:52:57.868944 ignition[719]: Ignition 2.14.0 Nov 1 00:52:57.868956 ignition[719]: Stage: disks Nov 1 00:52:57.869068 ignition[719]: reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 00:52:57.869092 ignition[719]: parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Nov 1 00:52:57.871214 ignition[719]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 1 00:52:57.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:57.875567 systemd[1]: Finished ignition-disks.service. Nov 1 00:52:57.874841 ignition[719]: disks: disks passed Nov 1 00:52:57.876408 systemd[1]: Reached target initrd-root-device.target. 
Nov 1 00:52:57.874894 ignition[719]: Ignition finished successfully Nov 1 00:52:57.877137 systemd[1]: Reached target local-fs-pre.target. Nov 1 00:52:57.878418 systemd[1]: Reached target local-fs.target. Nov 1 00:52:57.879690 systemd[1]: Reached target sysinit.target. Nov 1 00:52:57.881077 systemd[1]: Reached target basic.target. Nov 1 00:52:57.883316 systemd[1]: Starting systemd-fsck-root.service... Nov 1 00:52:57.900433 systemd-fsck[727]: ROOT: clean, 637/553520 files, 56032/553472 blocks Nov 1 00:52:57.903964 systemd[1]: Finished systemd-fsck-root.service. Nov 1 00:52:57.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:57.907131 systemd[1]: Mounting sysroot.mount... Nov 1 00:52:57.916778 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Nov 1 00:52:57.916797 systemd[1]: Mounted sysroot.mount. Nov 1 00:52:57.917494 systemd[1]: Reached target initrd-root-fs.target. Nov 1 00:52:57.919845 systemd[1]: Mounting sysroot-usr.mount... Nov 1 00:52:57.921227 systemd[1]: Starting flatcar-digitalocean-network.service... Nov 1 00:52:57.923128 systemd[1]: Starting flatcar-metadata-hostname.service... Nov 1 00:52:57.923983 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 1 00:52:57.924024 systemd[1]: Reached target ignition-diskful.target. Nov 1 00:52:57.930690 systemd[1]: Mounted sysroot-usr.mount. Nov 1 00:52:57.933609 systemd[1]: Starting initrd-setup-root.service... 
Nov 1 00:52:57.945884 initrd-setup-root[739]: cut: /sysroot/etc/passwd: No such file or directory Nov 1 00:52:57.959609 initrd-setup-root[747]: cut: /sysroot/etc/group: No such file or directory Nov 1 00:52:57.968036 initrd-setup-root[757]: cut: /sysroot/etc/shadow: No such file or directory Nov 1 00:52:57.978705 initrd-setup-root[767]: cut: /sysroot/etc/gshadow: No such file or directory Nov 1 00:52:58.040352 systemd[1]: Finished initrd-setup-root.service. Nov 1 00:52:58.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:58.042096 systemd[1]: Starting ignition-mount.service... Nov 1 00:52:58.048901 coreos-metadata[733]: Nov 01 00:52:58.048 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 1 00:52:58.043506 systemd[1]: Starting sysroot-boot.service... Nov 1 00:52:58.060291 coreos-metadata[733]: Nov 01 00:52:58.060 INFO Fetch successful Nov 1 00:52:58.061267 bash[785]: umount: /sysroot/usr/share/oem: not mounted. Nov 1 00:52:58.072728 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Nov 1 00:52:58.072882 systemd[1]: Finished flatcar-digitalocean-network.service. Nov 1 00:52:58.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:58.073000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-digitalocean-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:52:58.078285 coreos-metadata[734]: Nov 01 00:52:58.078 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 1 00:52:58.081124 ignition[786]: INFO : Ignition 2.14.0 Nov 1 00:52:58.082052 ignition[786]: INFO : Stage: mount Nov 1 00:52:58.083388 ignition[786]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 00:52:58.084443 ignition[786]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Nov 1 00:52:58.087834 ignition[786]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 1 00:52:58.089766 ignition[786]: INFO : mount: mount passed Nov 1 00:52:58.092784 coreos-metadata[734]: Nov 01 00:52:58.091 INFO Fetch successful Nov 1 00:52:58.093610 ignition[786]: INFO : Ignition finished successfully Nov 1 00:52:58.095064 systemd[1]: Finished ignition-mount.service. Nov 1 00:52:58.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:58.096596 coreos-metadata[734]: Nov 01 00:52:58.095 INFO wrote hostname ci-3510.3.8-n-0efaf8214b to /sysroot/etc/hostname Nov 1 00:52:58.097593 systemd[1]: Finished flatcar-metadata-hostname.service. Nov 1 00:52:58.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:58.102954 systemd[1]: Finished sysroot-boot.service. Nov 1 00:52:58.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:58.300273 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
Nov 1 00:52:58.316873 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (794) Nov 1 00:52:58.321095 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:52:58.321138 kernel: BTRFS info (device vda6): using free space tree Nov 1 00:52:58.321152 kernel: BTRFS info (device vda6): has skinny extents Nov 1 00:52:58.326861 systemd[1]: Mounted sysroot-usr-share-oem.mount. Nov 1 00:52:58.328628 systemd[1]: Starting ignition-files.service... Nov 1 00:52:58.345217 ignition[814]: INFO : Ignition 2.14.0 Nov 1 00:52:58.345217 ignition[814]: INFO : Stage: files Nov 1 00:52:58.347001 ignition[814]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 00:52:58.347001 ignition[814]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Nov 1 00:52:58.350253 ignition[814]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 1 00:52:58.352819 ignition[814]: DEBUG : files: compiled without relabeling support, skipping Nov 1 00:52:58.354274 ignition[814]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 1 00:52:58.354274 ignition[814]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 1 00:52:58.357017 ignition[814]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 1 00:52:58.358249 ignition[814]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 1 00:52:58.359335 ignition[814]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 1 00:52:58.359335 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Nov 1 00:52:58.359335 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file 
"/sysroot/etc/flatcar-cgroupv1" Nov 1 00:52:58.359335 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 1 00:52:58.358585 unknown[814]: wrote ssh authorized keys file for user: core Nov 1 00:52:58.365588 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Nov 1 00:52:58.413259 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Nov 1 00:52:58.511845 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 1 00:52:58.513264 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Nov 1 00:52:58.513264 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Nov 1 00:52:58.513264 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 1 00:52:58.513264 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 1 00:52:58.513264 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 00:52:58.519067 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 00:52:58.519067 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 00:52:58.519067 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 00:52:58.523439 ignition[814]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 00:52:58.523439 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 00:52:58.523439 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 00:52:58.523439 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 00:52:58.523439 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 00:52:58.523439 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Nov 1 00:52:58.817009 systemd-networkd[689]: eth1: Gained IPv6LL Nov 1 00:52:58.918733 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Nov 1 00:52:59.273365 ignition[814]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 00:52:59.273365 ignition[814]: INFO : files: op(c): [started] processing unit "coreos-metadata-sshkeys@.service" Nov 1 00:52:59.273365 ignition[814]: INFO : files: op(c): [finished] processing unit "coreos-metadata-sshkeys@.service" Nov 1 00:52:59.273365 ignition[814]: INFO : files: op(d): [started] processing unit "containerd.service" Nov 1 00:52:59.277823 ignition[814]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Nov 1 00:52:59.277823 
ignition[814]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Nov 1 00:52:59.277823 ignition[814]: INFO : files: op(d): [finished] processing unit "containerd.service" Nov 1 00:52:59.277823 ignition[814]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Nov 1 00:52:59.277823 ignition[814]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 00:52:59.277823 ignition[814]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 00:52:59.277823 ignition[814]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Nov 1 00:52:59.277823 ignition[814]: INFO : files: op(11): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Nov 1 00:52:59.277823 ignition[814]: INFO : files: op(11): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Nov 1 00:52:59.277823 ignition[814]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Nov 1 00:52:59.277823 ignition[814]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Nov 1 00:52:59.302614 kernel: kauditd_printk_skb: 29 callbacks suppressed Nov 1 00:52:59.302648 kernel: audit: type=1130 audit(1761958379.284:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:59.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:52:59.302732 ignition[814]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 1 00:52:59.302732 ignition[814]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 1 00:52:59.302732 ignition[814]: INFO : files: files passed Nov 1 00:52:59.302732 ignition[814]: INFO : Ignition finished successfully Nov 1 00:52:59.325643 kernel: audit: type=1130 audit(1761958379.302:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:59.325669 kernel: audit: type=1131 audit(1761958379.302:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:59.325682 kernel: audit: type=1130 audit(1761958379.315:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:59.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:59.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:59.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:59.283942 systemd[1]: Finished ignition-files.service. 
Nov 1 00:52:59.286738 systemd[1]: Starting initrd-setup-root-after-ignition.service... Nov 1 00:52:59.327819 initrd-setup-root-after-ignition[839]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 1 00:52:59.295795 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Nov 1 00:52:59.296565 systemd[1]: Starting ignition-quench.service... Nov 1 00:52:59.301494 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 1 00:52:59.301593 systemd[1]: Finished ignition-quench.service. Nov 1 00:52:59.310379 systemd[1]: Finished initrd-setup-root-after-ignition.service. Nov 1 00:52:59.316916 systemd[1]: Reached target ignition-complete.target. Nov 1 00:52:59.325085 systemd[1]: Starting initrd-parse-etc.service... Nov 1 00:52:59.328992 systemd-networkd[689]: eth0: Gained IPv6LL Nov 1 00:52:59.341618 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 1 00:52:59.353995 kernel: audit: type=1130 audit(1761958379.341:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:59.354028 kernel: audit: type=1131 audit(1761958379.341:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:59.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:59.341000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:59.341719 systemd[1]: Finished initrd-parse-etc.service. 
Nov 1 00:52:59.342539 systemd[1]: Reached target initrd-fs.target. Nov 1 00:52:59.354620 systemd[1]: Reached target initrd.target. Nov 1 00:52:59.355869 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Nov 1 00:52:59.356650 systemd[1]: Starting dracut-pre-pivot.service... Nov 1 00:52:59.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:59.370328 systemd[1]: Finished dracut-pre-pivot.service. Nov 1 00:52:59.383884 kernel: audit: type=1130 audit(1761958379.370:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:59.381873 systemd[1]: Starting initrd-cleanup.service... Nov 1 00:52:59.391211 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 1 00:52:59.391403 systemd[1]: Finished initrd-cleanup.service. Nov 1 00:52:59.404268 kernel: audit: type=1130 audit(1761958379.392:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:59.404294 kernel: audit: type=1131 audit(1761958379.392:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:59.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:59.392000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:52:59.393472 systemd[1]: Stopped target nss-lookup.target. Nov 1 00:52:59.404860 systemd[1]: Stopped target remote-cryptsetup.target. Nov 1 00:52:59.406191 systemd[1]: Stopped target timers.target. Nov 1 00:52:59.407517 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 1 00:52:59.415145 kernel: audit: type=1131 audit(1761958379.408:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:59.408000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:59.407578 systemd[1]: Stopped dracut-pre-pivot.service. Nov 1 00:52:59.408859 systemd[1]: Stopped target initrd.target. Nov 1 00:52:59.415736 systemd[1]: Stopped target basic.target. Nov 1 00:52:59.417080 systemd[1]: Stopped target ignition-complete.target. Nov 1 00:52:59.418366 systemd[1]: Stopped target ignition-diskful.target. Nov 1 00:52:59.419640 systemd[1]: Stopped target initrd-root-device.target. Nov 1 00:52:59.420992 systemd[1]: Stopped target remote-fs.target. Nov 1 00:52:59.422316 systemd[1]: Stopped target remote-fs-pre.target. Nov 1 00:52:59.423661 systemd[1]: Stopped target sysinit.target. Nov 1 00:52:59.424955 systemd[1]: Stopped target local-fs.target. Nov 1 00:52:59.426290 systemd[1]: Stopped target local-fs-pre.target. Nov 1 00:52:59.427648 systemd[1]: Stopped target swap.target. Nov 1 00:52:59.429000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:59.428984 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 1 00:52:59.429045 systemd[1]: Stopped dracut-pre-mount.service. 
Nov 1 00:52:59.432000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:59.430380 systemd[1]: Stopped target cryptsetup.target. Nov 1 00:52:59.433000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:59.431559 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 1 00:52:59.434000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:59.431604 systemd[1]: Stopped dracut-initqueue.service. Nov 1 00:52:59.436000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:59.433024 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 1 00:52:59.433065 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Nov 1 00:52:59.434289 systemd[1]: ignition-files.service: Deactivated successfully. Nov 1 00:52:59.434329 systemd[1]: Stopped ignition-files.service. Nov 1 00:52:59.435558 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Nov 1 00:52:59.447342 iscsid[699]: iscsid shutting down. Nov 1 00:52:59.435597 systemd[1]: Stopped flatcar-metadata-hostname.service. Nov 1 00:52:59.437802 systemd[1]: Stopping ignition-mount.service... Nov 1 00:52:59.449564 systemd[1]: Stopping iscsid.service... Nov 1 00:52:59.451204 systemd[1]: Stopping sysroot-boot.service... 
Nov 1 00:52:59.453000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:59.454000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:59.455000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:59.461614 ignition[852]: INFO : Ignition 2.14.0 Nov 1 00:52:59.461614 ignition[852]: INFO : Stage: umount Nov 1 00:52:59.461614 ignition[852]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 00:52:59.461614 ignition[852]: DEBUG : parsing config with SHA512: 865c03baa79b8c74023d13a0b3666474fa06a165421a1e05731b76e0f557d42c5c89d4870a0b9c4182ad7d4d8209de20dca9c9da63d637e0410fbd60314cac6c Nov 1 00:52:59.461614 ignition[852]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 1 00:52:59.461614 ignition[852]: INFO : umount: umount passed Nov 1 00:52:59.461614 ignition[852]: INFO : Ignition finished successfully Nov 1 00:52:59.462000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:59.465000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:59.465000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:52:59.466000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:59.467000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:59.468000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:59.495000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:59.453760 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 1 00:52:59.453826 systemd[1]: Stopped systemd-udev-trigger.service. Nov 1 00:52:59.454510 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 1 00:52:59.454547 systemd[1]: Stopped dracut-pre-trigger.service. Nov 1 00:52:59.455551 systemd[1]: iscsid.service: Deactivated successfully. Nov 1 00:52:59.455661 systemd[1]: Stopped iscsid.service. Nov 1 00:52:59.538000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:59.457060 systemd[1]: Stopping iscsiuio.service... Nov 1 00:52:59.540000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:59.541000 audit: BPF prog-id=6 op=UNLOAD Nov 1 00:52:59.460932 systemd[1]: iscsiuio.service: Deactivated successfully. 
Nov 1 00:52:59.544000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:59.545000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:59.461028 systemd[1]: Stopped iscsiuio.service. Nov 1 00:52:59.463823 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 1 00:52:59.546000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:59.464021 systemd[1]: Stopped ignition-mount.service. Nov 1 00:52:59.465876 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 1 00:52:59.465922 systemd[1]: Stopped ignition-disks.service. Nov 1 00:52:59.466550 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 1 00:52:59.466591 systemd[1]: Stopped ignition-kargs.service. Nov 1 00:52:59.467242 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 1 00:52:59.467282 systemd[1]: Stopped ignition-fetch.service. Nov 1 00:52:59.467907 systemd[1]: Stopped target network.target. Nov 1 00:52:59.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:59.558000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:59.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Nov 1 00:52:59.468502 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 1 00:52:59.468540 systemd[1]: Stopped ignition-fetch-offline.service. Nov 1 00:52:59.469205 systemd[1]: Stopped target paths.target. Nov 1 00:52:59.564000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:59.469814 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 1 00:52:59.565000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:59.471874 systemd[1]: Stopped systemd-ask-password-console.path. Nov 1 00:52:59.567000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:59.472810 systemd[1]: Stopped target slices.target. Nov 1 00:52:59.568000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:59.473394 systemd[1]: Stopped target sockets.target. Nov 1 00:52:59.474067 systemd[1]: iscsid.socket: Deactivated successfully. Nov 1 00:52:59.474104 systemd[1]: Closed iscsid.socket. Nov 1 00:52:59.474694 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 1 00:52:59.474724 systemd[1]: Closed iscsiuio.socket. Nov 1 00:52:59.495366 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 1 00:52:59.495441 systemd[1]: Stopped ignition-setup.service. Nov 1 00:52:59.496897 systemd[1]: Stopping systemd-networkd.service... Nov 1 00:52:59.519063 systemd[1]: Stopping systemd-resolved.service... 
Nov 1 00:52:59.519236 systemd-networkd[689]: eth0: DHCPv6 lease lost Nov 1 00:52:59.521015 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 1 00:52:59.578000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:59.536608 systemd-networkd[689]: eth1: DHCPv6 lease lost Nov 1 00:52:59.578000 audit: BPF prog-id=9 op=UNLOAD Nov 1 00:52:59.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:59.537738 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 1 00:52:59.580000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:59.537936 systemd[1]: Stopped systemd-resolved.service. Nov 1 00:52:59.540003 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 1 00:52:59.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:59.583000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:52:59.540137 systemd[1]: Stopped systemd-networkd.service. Nov 1 00:52:59.541745 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 1 00:52:59.541904 systemd[1]: Closed systemd-networkd.socket. Nov 1 00:52:59.543912 systemd[1]: Stopping network-cleanup.service... 
Nov 1 00:52:59.544591 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 1 00:52:59.544663 systemd[1]: Stopped parse-ip-for-networkd.service. Nov 1 00:52:59.545424 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 1 00:52:59.545497 systemd[1]: Stopped systemd-sysctl.service. Nov 1 00:52:59.546514 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 1 00:52:59.595000 audit: BPF prog-id=8 op=UNLOAD Nov 1 00:52:59.596000 audit: BPF prog-id=7 op=UNLOAD Nov 1 00:52:59.599000 audit: BPF prog-id=5 op=UNLOAD Nov 1 00:52:59.601000 audit: BPF prog-id=4 op=UNLOAD Nov 1 00:52:59.601000 audit: BPF prog-id=3 op=UNLOAD Nov 1 00:52:59.546561 systemd[1]: Stopped systemd-modules-load.service. Nov 1 00:52:59.552650 systemd[1]: Stopping systemd-udevd.service... Nov 1 00:52:59.555140 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Nov 1 00:52:59.555640 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 1 00:52:59.555715 systemd[1]: Stopped sysroot-boot.service. Nov 1 00:52:59.558630 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 1 00:52:59.558837 systemd[1]: Stopped systemd-udevd.service. Nov 1 00:52:59.560047 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 1 00:52:59.560125 systemd[1]: Stopped network-cleanup.service. Nov 1 00:52:59.561049 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 1 00:52:59.561088 systemd[1]: Closed systemd-udevd-control.socket. Nov 1 00:52:59.562267 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 1 00:52:59.562300 systemd[1]: Closed systemd-udevd-kernel.socket. Nov 1 00:52:59.563673 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 1 00:52:59.563720 systemd[1]: Stopped dracut-pre-udev.service. Nov 1 00:52:59.565015 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 1 00:52:59.565054 systemd[1]: Stopped dracut-cmdline.service. 
Nov 1 00:52:59.620793 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Nov 1 00:52:59.566328 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 1 00:52:59.566370 systemd[1]: Stopped dracut-cmdline-ask.service. Nov 1 00:52:59.567864 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 1 00:52:59.567906 systemd[1]: Stopped initrd-setup-root.service. Nov 1 00:52:59.569879 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Nov 1 00:52:59.577582 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 1 00:52:59.577646 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Nov 1 00:52:59.579323 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 1 00:52:59.579384 systemd[1]: Stopped kmod-static-nodes.service. Nov 1 00:52:59.580304 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 1 00:52:59.580345 systemd[1]: Stopped systemd-vconsole-setup.service. Nov 1 00:52:59.582508 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Nov 1 00:52:59.582975 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 1 00:52:59.583060 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Nov 1 00:52:59.583948 systemd[1]: Reached target initrd-switch-root.target. Nov 1 00:52:59.585953 systemd[1]: Starting initrd-switch-root.service... Nov 1 00:52:59.594152 systemd[1]: Switching root. Nov 1 00:52:59.631422 systemd-journald[184]: Journal stopped Nov 1 00:53:03.218054 kernel: SELinux: Class mctp_socket not defined in policy. Nov 1 00:53:03.218115 kernel: SELinux: Class anon_inode not defined in policy. 
Nov 1 00:53:03.218129 kernel: SELinux: the above unknown classes and permissions will be allowed Nov 1 00:53:03.218141 kernel: SELinux: policy capability network_peer_controls=1 Nov 1 00:53:03.218153 kernel: SELinux: policy capability open_perms=1 Nov 1 00:53:03.218164 kernel: SELinux: policy capability extended_socket_class=1 Nov 1 00:53:03.218180 kernel: SELinux: policy capability always_check_network=0 Nov 1 00:53:03.218191 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 1 00:53:03.218207 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 1 00:53:03.218222 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 1 00:53:03.218237 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 1 00:53:03.218250 systemd[1]: Successfully loaded SELinux policy in 53.542ms. Nov 1 00:53:03.218272 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.809ms. Nov 1 00:53:03.218285 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Nov 1 00:53:03.218307 systemd[1]: Detected virtualization kvm. Nov 1 00:53:03.218320 systemd[1]: Detected architecture x86-64. Nov 1 00:53:03.218341 systemd[1]: Detected first boot. Nov 1 00:53:03.218354 systemd[1]: Hostname set to . Nov 1 00:53:03.218371 systemd[1]: Initializing machine ID from VM UUID. Nov 1 00:53:03.218384 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Nov 1 00:53:03.218396 systemd[1]: Populated /etc with preset unit settings. Nov 1 00:53:03.218408 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Nov 1 00:53:03.218431 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 00:53:03.218445 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:53:03.218458 systemd[1]: Queued start job for default target multi-user.target. Nov 1 00:53:03.218470 systemd[1]: Unnecessary job was removed for dev-vda6.device. Nov 1 00:53:03.218482 systemd[1]: Created slice system-addon\x2dconfig.slice. Nov 1 00:53:03.218495 systemd[1]: Created slice system-addon\x2drun.slice. Nov 1 00:53:03.218508 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Nov 1 00:53:03.218520 systemd[1]: Created slice system-getty.slice. Nov 1 00:53:03.218538 systemd[1]: Created slice system-modprobe.slice. Nov 1 00:53:03.218551 systemd[1]: Created slice system-serial\x2dgetty.slice. Nov 1 00:53:03.218564 systemd[1]: Created slice system-system\x2dcloudinit.slice. Nov 1 00:53:03.218576 systemd[1]: Created slice system-systemd\x2dfsck.slice. Nov 1 00:53:03.218588 systemd[1]: Created slice user.slice. Nov 1 00:53:03.218600 systemd[1]: Started systemd-ask-password-console.path. Nov 1 00:53:03.218613 systemd[1]: Started systemd-ask-password-wall.path. Nov 1 00:53:03.218631 systemd[1]: Set up automount boot.automount. Nov 1 00:53:03.218648 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Nov 1 00:53:03.218675 systemd[1]: Reached target integritysetup.target. Nov 1 00:53:03.218689 systemd[1]: Reached target remote-cryptsetup.target. Nov 1 00:53:03.218701 systemd[1]: Reached target remote-fs.target. Nov 1 00:53:03.218714 systemd[1]: Reached target slices.target. Nov 1 00:53:03.218726 systemd[1]: Reached target swap.target. Nov 1 00:53:03.218739 systemd[1]: Reached target torcx.target. 
Nov 1 00:53:03.218762 systemd[1]: Reached target veritysetup.target. Nov 1 00:53:03.218776 systemd[1]: Listening on systemd-coredump.socket. Nov 1 00:53:03.218789 systemd[1]: Listening on systemd-initctl.socket. Nov 1 00:53:03.218801 systemd[1]: Listening on systemd-journald-audit.socket. Nov 1 00:53:03.218814 systemd[1]: Listening on systemd-journald-dev-log.socket. Nov 1 00:53:03.218825 systemd[1]: Listening on systemd-journald.socket. Nov 1 00:53:03.218837 systemd[1]: Listening on systemd-networkd.socket. Nov 1 00:53:03.218849 systemd[1]: Listening on systemd-udevd-control.socket. Nov 1 00:53:03.218861 systemd[1]: Listening on systemd-udevd-kernel.socket. Nov 1 00:53:03.218874 systemd[1]: Listening on systemd-userdbd.socket. Nov 1 00:53:03.218888 systemd[1]: Mounting dev-hugepages.mount... Nov 1 00:53:03.218900 systemd[1]: Mounting dev-mqueue.mount... Nov 1 00:53:03.218912 systemd[1]: Mounting media.mount... Nov 1 00:53:03.218931 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:53:03.218944 systemd[1]: Mounting sys-kernel-debug.mount... Nov 1 00:53:03.218956 systemd[1]: Mounting sys-kernel-tracing.mount... Nov 1 00:53:03.218969 systemd[1]: Mounting tmp.mount... Nov 1 00:53:03.218981 systemd[1]: Starting flatcar-tmpfiles.service... Nov 1 00:53:03.218994 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:53:03.219009 systemd[1]: Starting kmod-static-nodes.service... Nov 1 00:53:03.219021 systemd[1]: Starting modprobe@configfs.service... Nov 1 00:53:03.219033 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:53:03.219046 systemd[1]: Starting modprobe@drm.service... Nov 1 00:53:03.219057 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:53:03.219070 systemd[1]: Starting modprobe@fuse.service... Nov 1 00:53:03.219082 systemd[1]: Starting modprobe@loop.service... 
Nov 1 00:53:03.219094 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 1 00:53:03.219107 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Nov 1 00:53:03.219122 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Nov 1 00:53:03.219134 systemd[1]: Starting systemd-journald.service... Nov 1 00:53:03.219146 kernel: fuse: init (API version 7.34) Nov 1 00:53:03.219158 systemd[1]: Starting systemd-modules-load.service... Nov 1 00:53:03.219170 kernel: loop: module loaded Nov 1 00:53:03.219182 systemd[1]: Starting systemd-network-generator.service... Nov 1 00:53:03.219200 systemd[1]: Starting systemd-remount-fs.service... Nov 1 00:53:03.219212 systemd[1]: Starting systemd-udev-trigger.service... Nov 1 00:53:03.219225 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:53:03.219241 systemd[1]: Mounted dev-hugepages.mount. Nov 1 00:53:03.219253 systemd[1]: Mounted dev-mqueue.mount. Nov 1 00:53:03.219265 systemd[1]: Mounted media.mount. Nov 1 00:53:03.219277 systemd[1]: Mounted sys-kernel-debug.mount. Nov 1 00:53:03.219289 systemd[1]: Mounted sys-kernel-tracing.mount. Nov 1 00:53:03.219302 systemd[1]: Mounted tmp.mount. Nov 1 00:53:03.219314 systemd[1]: Finished kmod-static-nodes.service. Nov 1 00:53:03.219326 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 1 00:53:03.219342 systemd-journald[1006]: Journal started Nov 1 00:53:03.219391 systemd-journald[1006]: Runtime Journal (/run/log/journal/36a8f3cfd0e142c987a0fc58a95e299a) is 4.9M, max 39.5M, 34.5M free. 
Nov 1 00:53:03.012000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Nov 1 00:53:03.012000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Nov 1 00:53:03.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:03.215000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Nov 1 00:53:03.215000 audit[1006]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffe1ac72380 a2=4000 a3=7ffe1ac7241c items=0 ppid=1 pid=1006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:03.215000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Nov 1 00:53:03.223786 systemd[1]: Finished modprobe@configfs.service. Nov 1 00:53:03.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:03.223000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:03.225825 systemd[1]: Started systemd-journald.service. 
Nov 1 00:53:03.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:03.227340 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:53:03.227512 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:53:03.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:03.229000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:03.230587 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 00:53:03.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:03.230000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:03.230832 systemd[1]: Finished modprobe@drm.service. Nov 1 00:53:03.231720 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:53:03.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:53:03.231000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:03.231926 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:53:03.232842 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 1 00:53:03.232983 systemd[1]: Finished modprobe@fuse.service. Nov 1 00:53:03.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:03.232000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:03.233890 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:53:03.234043 systemd[1]: Finished modprobe@loop.service. Nov 1 00:53:03.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:03.233000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:03.235108 systemd[1]: Finished systemd-modules-load.service. Nov 1 00:53:03.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:03.238069 systemd[1]: Finished systemd-network-generator.service. 
Nov 1 00:53:03.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:03.240172 systemd[1]: Finished systemd-remount-fs.service. Nov 1 00:53:03.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:03.242888 systemd[1]: Finished flatcar-tmpfiles.service. Nov 1 00:53:03.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:03.243923 systemd[1]: Reached target network-pre.target. Nov 1 00:53:03.246240 systemd[1]: Mounting sys-fs-fuse-connections.mount... Nov 1 00:53:03.248106 systemd[1]: Mounting sys-kernel-config.mount... Nov 1 00:53:03.248845 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 1 00:53:03.252911 systemd[1]: Starting systemd-hwdb-update.service... Nov 1 00:53:03.254766 systemd[1]: Starting systemd-journal-flush.service... Nov 1 00:53:03.256530 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:53:03.259825 systemd[1]: Starting systemd-random-seed.service... Nov 1 00:53:03.264901 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 00:53:03.266203 systemd[1]: Starting systemd-sysctl.service... Nov 1 00:53:03.270536 systemd[1]: Starting systemd-sysusers.service... Nov 1 00:53:03.275075 systemd[1]: Mounted sys-fs-fuse-connections.mount. Nov 1 00:53:03.275833 systemd[1]: Mounted sys-kernel-config.mount. 
Nov 1 00:53:03.281478 systemd-journald[1006]: Time spent on flushing to /var/log/journal/36a8f3cfd0e142c987a0fc58a95e299a is 43.762ms for 1095 entries. Nov 1 00:53:03.281478 systemd-journald[1006]: System Journal (/var/log/journal/36a8f3cfd0e142c987a0fc58a95e299a) is 8.0M, max 195.6M, 187.6M free. Nov 1 00:53:03.333462 systemd-journald[1006]: Received client request to flush runtime journal. Nov 1 00:53:03.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:03.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:03.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:03.290745 systemd[1]: Finished systemd-random-seed.service. Nov 1 00:53:03.291649 systemd[1]: Reached target first-boot-complete.target. Nov 1 00:53:03.306568 systemd[1]: Finished systemd-sysctl.service. Nov 1 00:53:03.320046 systemd[1]: Finished systemd-sysusers.service. Nov 1 00:53:03.322155 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Nov 1 00:53:03.334234 systemd[1]: Finished systemd-journal-flush.service. Nov 1 00:53:03.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:03.346160 systemd[1]: Finished systemd-udev-trigger.service. 
Nov 1 00:53:03.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:03.348194 systemd[1]: Starting systemd-udev-settle.service... Nov 1 00:53:03.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:03.354528 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Nov 1 00:53:03.364778 udevadm[1048]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Nov 1 00:53:03.808875 systemd[1]: Finished systemd-hwdb-update.service. Nov 1 00:53:03.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:03.810900 systemd[1]: Starting systemd-udevd.service... Nov 1 00:53:03.834711 systemd-udevd[1051]: Using default interface naming scheme 'v252'. Nov 1 00:53:03.855357 systemd[1]: Started systemd-udevd.service. Nov 1 00:53:03.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:03.857889 systemd[1]: Starting systemd-networkd.service... Nov 1 00:53:03.866154 systemd[1]: Starting systemd-userdbd.service... Nov 1 00:53:03.909042 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:53:03.909207 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
Nov 1 00:53:03.910351 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:53:03.912187 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:53:03.914893 systemd[1]: Starting modprobe@loop.service... Nov 1 00:53:03.915656 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 1 00:53:03.915760 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 1 00:53:03.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:03.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:03.919000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:03.917882 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:53:03.918180 systemd[1]: Started systemd-userdbd.service. Nov 1 00:53:03.919194 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:53:03.919380 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:53:03.922989 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:53:03.923148 systemd[1]: Finished modprobe@loop.service. Nov 1 00:53:03.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:53:03.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:03.924040 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:53:03.924182 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:53:03.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:03.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:03.925050 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:53:03.925101 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 00:53:03.991521 systemd[1]: Found device dev-ttyS0.device. Nov 1 00:53:04.010874 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Nov 1 00:53:04.026208 systemd-networkd[1058]: lo: Link UP Nov 1 00:53:04.026223 systemd-networkd[1058]: lo: Gained carrier Nov 1 00:53:04.026871 systemd-networkd[1058]: Enumeration completed Nov 1 00:53:04.027001 systemd[1]: Started systemd-networkd.service. Nov 1 00:53:04.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:04.028093 systemd-networkd[1058]: eth1: Configuring with /run/systemd/network/10-3e:12:e1:61:47:e2.network. 
Nov 1 00:53:04.029401 systemd-networkd[1058]: eth0: Configuring with /run/systemd/network/10-ca:86:ff:94:d7:89.network. Nov 1 00:53:04.030324 systemd-networkd[1058]: eth1: Link UP Nov 1 00:53:04.030337 systemd-networkd[1058]: eth1: Gained carrier Nov 1 00:53:04.034030 systemd-networkd[1058]: eth0: Link UP Nov 1 00:53:04.034056 systemd-networkd[1058]: eth0: Gained carrier Nov 1 00:53:04.069781 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Nov 1 00:53:04.074776 kernel: ACPI: button: Power Button [PWRF] Nov 1 00:53:04.089000 audit[1059]: AVC avc: denied { confidentiality } for pid=1059 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Nov 1 00:53:04.089000 audit[1059]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=563bae4fc360 a1=338ec a2=7f37a9097bc5 a3=5 items=110 ppid=1051 pid=1059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:04.089000 audit: CWD cwd="/" Nov 1 00:53:04.089000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=1 name=(null) inode=14273 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=2 name=(null) inode=14273 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=3 name=(null) inode=14274 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=4 name=(null) inode=14273 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=5 name=(null) inode=14275 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=6 name=(null) inode=14273 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=7 name=(null) inode=14276 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=8 name=(null) inode=14276 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=9 name=(null) inode=14277 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=10 name=(null) inode=14276 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=11 name=(null) inode=14278 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=12 name=(null) inode=14276 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 
audit: PATH item=13 name=(null) inode=14279 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=14 name=(null) inode=14276 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=15 name=(null) inode=14280 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=16 name=(null) inode=14276 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=17 name=(null) inode=14281 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=18 name=(null) inode=14273 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=19 name=(null) inode=14282 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=20 name=(null) inode=14282 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=21 name=(null) inode=14283 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=22 name=(null) inode=14282 
dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=23 name=(null) inode=14284 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=24 name=(null) inode=14282 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=25 name=(null) inode=14285 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=26 name=(null) inode=14282 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=27 name=(null) inode=14286 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=28 name=(null) inode=14282 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=29 name=(null) inode=14287 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=30 name=(null) inode=14273 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=31 name=(null) inode=14288 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=32 name=(null) inode=14288 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=33 name=(null) inode=14289 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=34 name=(null) inode=14288 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=35 name=(null) inode=14290 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=36 name=(null) inode=14288 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=37 name=(null) inode=14291 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=38 name=(null) inode=14288 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=39 name=(null) inode=14292 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=40 name=(null) inode=14288 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=41 name=(null) inode=14293 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=42 name=(null) inode=14273 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=43 name=(null) inode=14294 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=44 name=(null) inode=14294 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=45 name=(null) inode=14295 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=46 name=(null) inode=14294 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=47 name=(null) inode=14296 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=48 name=(null) inode=14294 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=49 name=(null) inode=14297 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 
Nov 1 00:53:04.089000 audit: PATH item=50 name=(null) inode=14294 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=51 name=(null) inode=14298 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=52 name=(null) inode=14294 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=53 name=(null) inode=14299 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=55 name=(null) inode=14300 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=56 name=(null) inode=14300 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=57 name=(null) inode=14301 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=58 name=(null) inode=14300 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=59 name=(null) 
inode=14302 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=60 name=(null) inode=14300 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=61 name=(null) inode=14303 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=62 name=(null) inode=14303 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=63 name=(null) inode=14304 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=64 name=(null) inode=14303 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=65 name=(null) inode=14305 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=66 name=(null) inode=14303 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=67 name=(null) inode=14306 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=68 name=(null) inode=14303 dev=00:0b mode=040750 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=69 name=(null) inode=14307 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=70 name=(null) inode=14303 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=71 name=(null) inode=14308 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=72 name=(null) inode=14300 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=73 name=(null) inode=14309 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=74 name=(null) inode=14309 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=75 name=(null) inode=14310 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=76 name=(null) inode=14309 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=77 name=(null) inode=14311 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=78 name=(null) inode=14309 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=79 name=(null) inode=14312 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=80 name=(null) inode=14309 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=81 name=(null) inode=14313 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=82 name=(null) inode=14309 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=83 name=(null) inode=14314 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=84 name=(null) inode=14300 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=85 name=(null) inode=14315 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=86 name=(null) inode=14315 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=87 name=(null) inode=14316 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=88 name=(null) inode=14315 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=89 name=(null) inode=14317 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=90 name=(null) inode=14315 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=91 name=(null) inode=14318 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=92 name=(null) inode=14315 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=93 name=(null) inode=14319 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=94 name=(null) inode=14315 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=95 name=(null) inode=14320 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 
audit: PATH item=96 name=(null) inode=14300 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=97 name=(null) inode=14321 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=98 name=(null) inode=14321 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=99 name=(null) inode=14322 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=100 name=(null) inode=14321 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=101 name=(null) inode=14323 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=102 name=(null) inode=14321 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=103 name=(null) inode=14324 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=104 name=(null) inode=14321 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=105 name=(null) inode=14325 
dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=106 name=(null) inode=14321 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=107 name=(null) inode=14326 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PATH item=109 name=(null) inode=14327 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:53:04.089000 audit: PROCTITLE proctitle="(udev-worker)" Nov 1 00:53:04.131773 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Nov 1 00:53:04.156774 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Nov 1 00:53:04.164808 kernel: mousedev: PS/2 mouse device common for all mice Nov 1 00:53:04.308783 kernel: EDAC MC: Ver: 3.0.0 Nov 1 00:53:04.331228 systemd[1]: Finished systemd-udev-settle.service. Nov 1 00:53:04.340534 kernel: kauditd_printk_skb: 203 callbacks suppressed Nov 1 00:53:04.340640 kernel: audit: type=1130 audit(1761958384.331:131): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:53:04.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:04.333937 systemd[1]: Starting lvm2-activation-early.service... Nov 1 00:53:04.357572 lvm[1094]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 00:53:04.381024 systemd[1]: Finished lvm2-activation-early.service. Nov 1 00:53:04.381000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:04.381939 systemd[1]: Reached target cryptsetup.target. Nov 1 00:53:04.390845 kernel: audit: type=1130 audit(1761958384.381:132): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:04.390197 systemd[1]: Starting lvm2-activation.service... Nov 1 00:53:04.396360 lvm[1096]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 00:53:04.422969 systemd[1]: Finished lvm2-activation.service. Nov 1 00:53:04.423863 systemd[1]: Reached target local-fs-pre.target. Nov 1 00:53:04.426011 systemd[1]: Mounting media-configdrive.mount... Nov 1 00:53:04.426729 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 1 00:53:04.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:04.427735 systemd[1]: Reached target machines.target. 
Nov 1 00:53:04.433809 kernel: audit: type=1130 audit(1761958384.422:133): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:04.434959 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Nov 1 00:53:04.446769 kernel: ISO 9660 Extensions: RRIP_1991A Nov 1 00:53:04.448018 systemd[1]: Mounted media-configdrive.mount. Nov 1 00:53:04.448740 systemd[1]: Reached target local-fs.target. Nov 1 00:53:04.450794 systemd[1]: Starting ldconfig.service... Nov 1 00:53:04.452352 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:53:04.452712 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:53:04.457896 systemd[1]: Starting systemd-boot-update.service... Nov 1 00:53:04.460401 systemd[1]: Starting systemd-machine-id-commit.service... Nov 1 00:53:04.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:04.462664 systemd[1]: Starting systemd-sysext.service... Nov 1 00:53:04.466135 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Nov 1 00:53:04.475772 kernel: audit: type=1130 audit(1761958384.468:134): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:53:04.483698 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1104 (bootctl) Nov 1 00:53:04.486832 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Nov 1 00:53:04.493952 systemd[1]: Unmounting usr-share-oem.mount... Nov 1 00:53:04.501966 systemd[1]: usr-share-oem.mount: Deactivated successfully. Nov 1 00:53:04.502225 systemd[1]: Unmounted usr-share-oem.mount. Nov 1 00:53:04.525190 kernel: loop0: detected capacity change from 0 to 224512 Nov 1 00:53:04.544911 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 1 00:53:04.545611 systemd[1]: Finished systemd-machine-id-commit.service. Nov 1 00:53:04.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:04.554036 kernel: audit: type=1130 audit(1761958384.545:135): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:04.568791 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 1 00:53:04.586836 kernel: loop1: detected capacity change from 0 to 224512 Nov 1 00:53:04.598346 (sd-sysext)[1119]: Using extensions 'kubernetes'. Nov 1 00:53:04.598828 (sd-sysext)[1119]: Merged extensions into '/usr'. Nov 1 00:53:04.622031 systemd-fsck[1115]: fsck.fat 4.2 (2021-01-31) Nov 1 00:53:04.622031 systemd-fsck[1115]: /dev/vda1: 790 files, 120773/258078 clusters Nov 1 00:53:04.629014 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:53:04.632204 systemd[1]: Mounting usr-share-oem.mount... Nov 1 00:53:04.633722 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
Nov 1 00:53:04.637532 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:53:04.644458 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:53:04.653789 systemd[1]: Starting modprobe@loop.service... Nov 1 00:53:04.655065 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:53:04.655235 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:53:04.655728 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:53:04.661126 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Nov 1 00:53:04.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:04.673032 kernel: audit: type=1130 audit(1761958384.663:136): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:04.676443 systemd[1]: Mounted usr-share-oem.mount. Nov 1 00:53:04.679247 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:53:04.679421 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:53:04.682299 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:53:04.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:04.682481 systemd[1]: Finished modprobe@efi_pstore.service. 
Nov 1 00:53:04.689076 kernel: audit: type=1130 audit(1761958384.681:137): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:04.689155 kernel: audit: type=1131 audit(1761958384.681:138): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:04.681000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:04.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:04.697568 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:53:04.697995 systemd[1]: Finished modprobe@loop.service. Nov 1 00:53:04.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:04.706097 systemd[1]: Mounting boot.mount... Nov 1 00:53:04.709817 kernel: audit: type=1130 audit(1761958384.695:139): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:04.709899 kernel: audit: type=1131 audit(1761958384.695:140): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:53:04.711157 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:53:04.711518 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 00:53:04.713575 systemd[1]: Finished systemd-sysext.service. Nov 1 00:53:04.718518 systemd[1]: Starting ensure-sysext.service... Nov 1 00:53:04.721660 systemd[1]: Starting systemd-tmpfiles-setup.service... Nov 1 00:53:04.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:04.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:04.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:04.725542 systemd[1]: Mounted boot.mount. Nov 1 00:53:04.744888 systemd[1]: Reloading. Nov 1 00:53:04.767840 systemd-tmpfiles[1137]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Nov 1 00:53:04.777327 systemd-tmpfiles[1137]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 1 00:53:04.781111 systemd-tmpfiles[1137]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 1 00:53:04.871647 ldconfig[1103]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Nov 1 00:53:04.892010 /usr/lib/systemd/system-generators/torcx-generator[1157]: time="2025-11-01T00:53:04Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 00:53:04.892420 /usr/lib/systemd/system-generators/torcx-generator[1157]: time="2025-11-01T00:53:04Z" level=info msg="torcx already run" Nov 1 00:53:04.999728 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 00:53:04.999761 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 00:53:05.019862 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:53:05.077301 systemd[1]: Finished ldconfig.service. Nov 1 00:53:05.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:05.078976 systemd[1]: Finished systemd-boot-update.service. Nov 1 00:53:05.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:05.081609 systemd[1]: Finished systemd-tmpfiles-setup.service. Nov 1 00:53:05.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Nov 1 00:53:05.087049 systemd[1]: Starting audit-rules.service... Nov 1 00:53:05.089810 systemd[1]: Starting clean-ca-certificates.service... Nov 1 00:53:05.092622 systemd[1]: Starting systemd-journal-catalog-update.service... Nov 1 00:53:05.096039 systemd[1]: Starting systemd-resolved.service... Nov 1 00:53:05.100370 systemd[1]: Starting systemd-timesyncd.service... Nov 1 00:53:05.102935 systemd[1]: Starting systemd-update-utmp.service... Nov 1 00:53:05.105046 systemd[1]: Finished clean-ca-certificates.service. Nov 1 00:53:05.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:05.118263 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 00:53:05.121613 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:53:05.125939 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:53:05.127000 audit[1219]: SYSTEM_BOOT pid=1219 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Nov 1 00:53:05.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:05.129512 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:53:05.132282 systemd[1]: Starting modprobe@loop.service... Nov 1 00:53:05.135900 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Nov 1 00:53:05.136312 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:53:05.136471 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 00:53:05.142831 systemd[1]: Finished systemd-journal-catalog-update.service. Nov 1 00:53:05.144345 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:53:05.144567 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:53:05.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:05.162000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:05.164357 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:53:05.164554 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:53:05.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:05.164000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:05.165710 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:53:05.166127 systemd[1]: Finished modprobe@loop.service. 
Nov 1 00:53:05.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:05.166000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:05.170026 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:53:05.174139 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:53:05.180567 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:53:05.183896 systemd[1]: Starting modprobe@loop.service... Nov 1 00:53:05.184818 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:53:05.185023 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:53:05.188042 systemd[1]: Starting systemd-update-done.service... Nov 1 00:53:05.189085 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 00:53:05.198052 systemd[1]: Finished systemd-update-utmp.service. Nov 1 00:53:05.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:05.205070 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:53:05.205366 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:53:05.207475 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Nov 1 00:53:05.207648 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:53:05.208990 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:53:05.209152 systemd[1]: Finished modprobe@loop.service. Nov 1 00:53:05.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:05.206000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:05.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:05.207000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:05.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:05.209000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:05.214419 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:53:05.218542 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:53:05.220768 systemd[1]: Starting modprobe@drm.service... Nov 1 00:53:05.224030 systemd[1]: Starting modprobe@efi_pstore.service... 
Nov 1 00:53:05.235217 systemd[1]: Starting modprobe@loop.service... Nov 1 00:53:05.236147 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:53:05.236364 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:53:05.238544 systemd[1]: Starting systemd-networkd-wait-online.service... Nov 1 00:53:05.239731 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 00:53:05.241740 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:53:05.242020 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:53:05.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:05.246000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:05.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:05.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:05.248000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:53:05.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:05.249000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:05.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:05.251000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:05.252000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Nov 1 00:53:05.252000 audit[1250]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe076e2920 a2=420 a3=0 items=0 ppid=1213 pid=1250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:05.252000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Nov 1 00:53:05.255317 augenrules[1250]: No rules Nov 1 00:53:05.247425 systemd[1]: Finished systemd-update-done.service. Nov 1 00:53:05.248525 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 00:53:05.248672 systemd[1]: Finished modprobe@drm.service. Nov 1 00:53:05.249707 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Nov 1 00:53:05.249864 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:53:05.250933 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:53:05.251095 systemd[1]: Finished modprobe@loop.service. Nov 1 00:53:05.252304 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:53:05.252407 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 00:53:05.254082 systemd[1]: Finished audit-rules.service. Nov 1 00:53:05.257186 systemd[1]: Finished ensure-sysext.service. Nov 1 00:53:05.300795 systemd[1]: Started systemd-timesyncd.service. Nov 1 00:53:05.301561 systemd[1]: Reached target time-set.target. Nov 1 00:53:05.303158 systemd-resolved[1217]: Positive Trust Anchors: Nov 1 00:53:05.303173 systemd-resolved[1217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 00:53:05.303437 systemd-resolved[1217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Nov 1 00:53:05.309922 systemd-resolved[1217]: Using system hostname 'ci-3510.3.8-n-0efaf8214b'. Nov 1 00:53:05.311924 systemd[1]: Started systemd-resolved.service. Nov 1 00:53:05.312667 systemd[1]: Reached target network.target. Nov 1 00:53:05.313291 systemd[1]: Reached target nss-lookup.target. Nov 1 00:53:05.313957 systemd[1]: Reached target sysinit.target. Nov 1 00:53:05.314637 systemd[1]: Started motdgen.path. Nov 1 00:53:05.315253 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. 
Nov 1 00:53:05.316157 systemd[1]: Started logrotate.timer. Nov 1 00:53:05.316913 systemd[1]: Started mdadm.timer. Nov 1 00:53:05.317508 systemd[1]: Started systemd-tmpfiles-clean.timer. Nov 1 00:53:05.318151 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 1 00:53:05.318185 systemd[1]: Reached target paths.target. Nov 1 00:53:05.318815 systemd[1]: Reached target timers.target. Nov 1 00:53:05.319714 systemd[1]: Listening on dbus.socket. Nov 1 00:53:05.321784 systemd[1]: Starting docker.socket... Nov 1 00:53:05.323886 systemd[1]: Listening on sshd.socket. Nov 1 00:53:05.324680 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:53:05.325019 systemd[1]: Listening on docker.socket. Nov 1 00:53:05.325843 systemd[1]: Reached target sockets.target. Nov 1 00:53:05.326573 systemd[1]: Reached target basic.target. Nov 1 00:53:05.327567 systemd[1]: System is tainted: cgroupsv1 Nov 1 00:53:05.327692 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Nov 1 00:53:05.327845 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Nov 1 00:53:05.329038 systemd[1]: Starting containerd.service... Nov 1 00:53:05.330903 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Nov 1 00:53:05.332774 systemd[1]: Starting dbus.service... Nov 1 00:53:05.336267 systemd[1]: Starting enable-oem-cloudinit.service... Nov 1 00:53:05.338296 systemd[1]: Starting extend-filesystems.service... Nov 1 00:53:05.339335 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Nov 1 00:53:05.343759 systemd[1]: Starting motdgen.service... 
Nov 1 00:53:05.347412 systemd[1]: Starting prepare-helm.service... Nov 1 00:53:05.350395 systemd[1]: Starting ssh-key-proc-cmdline.service... Nov 1 00:53:05.350897 jq[1278]: false Nov 1 00:53:05.354445 systemd[1]: Starting sshd-keygen.service... Nov 1 00:53:05.363455 systemd[1]: Starting systemd-logind.service... Nov 1 00:53:05.364908 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:53:05.365039 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 1 00:53:05.367611 systemd[1]: Starting update-engine.service... Nov 1 00:53:05.371646 systemd[1]: Starting update-ssh-keys-after-ignition.service... Nov 1 00:53:05.380536 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 1 00:53:05.383139 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Nov 1 00:53:05.399492 dbus-daemon[1275]: [system] SELinux support is enabled Nov 1 00:53:05.399668 systemd[1]: Started dbus.service. Nov 1 00:53:05.402436 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 1 00:53:05.402465 systemd[1]: Reached target system-config.target. Nov 1 00:53:05.403160 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 1 00:53:05.403182 systemd[1]: Reached target user-config.target. Nov 1 00:53:05.404766 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 1 00:53:05.405014 systemd[1]: Finished ssh-key-proc-cmdline.service. 
Nov 1 00:53:05.416295 jq[1294]: true Nov 1 00:53:05.430025 extend-filesystems[1279]: Found loop1 Nov 1 00:53:05.431447 jq[1307]: true Nov 1 00:53:05.431775 extend-filesystems[1279]: Found vda Nov 1 00:53:05.432550 extend-filesystems[1279]: Found vda1 Nov 1 00:53:05.437444 tar[1301]: linux-amd64/LICENSE Nov 1 00:53:05.437444 tar[1301]: linux-amd64/helm Nov 1 00:53:05.437976 extend-filesystems[1279]: Found vda2 Nov 1 00:53:05.438793 extend-filesystems[1279]: Found vda3 Nov 1 00:53:05.443592 extend-filesystems[1279]: Found usr Nov 1 00:53:05.443592 extend-filesystems[1279]: Found vda4 Nov 1 00:53:05.443592 extend-filesystems[1279]: Found vda6 Nov 1 00:53:05.443592 extend-filesystems[1279]: Found vda7 Nov 1 00:53:05.443592 extend-filesystems[1279]: Found vda9 Nov 1 00:53:05.443592 extend-filesystems[1279]: Checking size of /dev/vda9 Nov 1 00:53:05.448654 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:53:05.448686 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:53:05.449380 systemd[1]: motdgen.service: Deactivated successfully. Nov 1 00:53:05.449638 systemd[1]: Finished motdgen.service. Nov 1 00:53:05.473004 systemd-networkd[1058]: eth1: Gained IPv6LL Nov 1 00:53:05.477080 systemd[1]: Finished systemd-networkd-wait-online.service. Nov 1 00:53:05.478102 systemd[1]: Reached target network-online.target. Nov 1 00:53:05.480165 systemd[1]: Starting kubelet.service... Nov 1 00:53:05.489926 extend-filesystems[1279]: Resized partition /dev/vda9 Nov 1 00:53:05.539047 update_engine[1292]: I1101 00:53:05.538321 1292 main.cc:92] Flatcar Update Engine starting Nov 1 00:53:05.543097 systemd[1]: Started update-engine.service. Nov 1 00:53:05.543411 update_engine[1292]: I1101 00:53:05.543145 1292 update_check_scheduler.cc:74] Next update check in 7m26s Nov 1 00:53:05.545691 systemd[1]: Started locksmithd.service. 
Nov 1 00:53:05.548047 extend-filesystems[1325]: resize2fs 1.46.5 (30-Dec-2021) Nov 1 00:53:05.560807 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Nov 1 00:53:05.569394 systemd-timesyncd[1218]: Contacted time server 23.157.160.168:123 (0.flatcar.pool.ntp.org). Nov 1 00:53:05.569476 systemd-timesyncd[1218]: Initial clock synchronization to Sat 2025-11-01 00:53:05.369665 UTC. Nov 1 00:53:05.590035 coreos-metadata[1274]: Nov 01 00:53:05.589 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 1 00:53:05.601900 coreos-metadata[1274]: Nov 01 00:53:05.601 INFO Fetch successful Nov 1 00:53:05.613834 bash[1339]: Updated "/home/core/.ssh/authorized_keys" Nov 1 00:53:05.614871 systemd[1]: Finished update-ssh-keys-after-ignition.service. Nov 1 00:53:05.626390 unknown[1274]: wrote ssh authorized keys file for user: core Nov 1 00:53:05.667581 update-ssh-keys[1343]: Updated "/home/core/.ssh/authorized_keys" Nov 1 00:53:05.668109 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Nov 1 00:53:05.696577 systemd-logind[1290]: Watching system buttons on /dev/input/event1 (Power Button) Nov 1 00:53:05.698736 env[1305]: time="2025-11-01T00:53:05.697709479Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Nov 1 00:53:05.700510 systemd-logind[1290]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 1 00:53:05.702133 systemd-logind[1290]: New seat seat0. Nov 1 00:53:05.707194 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Nov 1 00:53:05.712780 systemd[1]: Started systemd-logind.service. Nov 1 00:53:05.731348 extend-filesystems[1325]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 1 00:53:05.731348 extend-filesystems[1325]: old_desc_blocks = 1, new_desc_blocks = 8 Nov 1 00:53:05.731348 extend-filesystems[1325]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. 
Nov 1 00:53:05.741545 extend-filesystems[1279]: Resized filesystem in /dev/vda9 Nov 1 00:53:05.741545 extend-filesystems[1279]: Found vdb Nov 1 00:53:05.731637 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 1 00:53:05.731925 systemd[1]: Finished extend-filesystems.service. Nov 1 00:53:05.777419 env[1305]: time="2025-11-01T00:53:05.777349472Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 1 00:53:05.779287 env[1305]: time="2025-11-01T00:53:05.779255202Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:53:05.782027 env[1305]: time="2025-11-01T00:53:05.781981882Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:53:05.782185 env[1305]: time="2025-11-01T00:53:05.782150305Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:53:05.782584 env[1305]: time="2025-11-01T00:53:05.782560617Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:53:05.783791 env[1305]: time="2025-11-01T00:53:05.783739926Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Nov 1 00:53:05.783902 env[1305]: time="2025-11-01T00:53:05.783872218Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Nov 1 00:53:05.783970 env[1305]: time="2025-11-01T00:53:05.783953820Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 1 00:53:05.784131 env[1305]: time="2025-11-01T00:53:05.784114336Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:53:05.784465 env[1305]: time="2025-11-01T00:53:05.784444661Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:53:05.785045 env[1305]: time="2025-11-01T00:53:05.785018973Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:53:05.785935 env[1305]: time="2025-11-01T00:53:05.785910083Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 1 00:53:05.786361 env[1305]: time="2025-11-01T00:53:05.786337639Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Nov 1 00:53:05.786833 env[1305]: time="2025-11-01T00:53:05.786739634Z" level=info msg="metadata content store policy set" policy=shared Nov 1 00:53:05.793021 systemd-networkd[1058]: eth0: Gained IPv6LL Nov 1 00:53:05.795799 env[1305]: time="2025-11-01T00:53:05.795722186Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 1 00:53:05.795914 env[1305]: time="2025-11-01T00:53:05.795895003Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Nov 1 00:53:05.795986 env[1305]: time="2025-11-01T00:53:05.795969677Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 1 00:53:05.796087 env[1305]: time="2025-11-01T00:53:05.796069537Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 1 00:53:05.796164 env[1305]: time="2025-11-01T00:53:05.796145408Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 1 00:53:05.796272 env[1305]: time="2025-11-01T00:53:05.796255771Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 1 00:53:05.796370 env[1305]: time="2025-11-01T00:53:05.796328585Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 1 00:53:05.796451 env[1305]: time="2025-11-01T00:53:05.796434341Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 1 00:53:05.796519 env[1305]: time="2025-11-01T00:53:05.796503754Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Nov 1 00:53:05.796594 env[1305]: time="2025-11-01T00:53:05.796578271Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 1 00:53:05.796671 env[1305]: time="2025-11-01T00:53:05.796655325Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 1 00:53:05.796773 env[1305]: time="2025-11-01T00:53:05.796733333Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 1 00:53:05.796957 env[1305]: time="2025-11-01T00:53:05.796938816Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Nov 1 00:53:05.797115 env[1305]: time="2025-11-01T00:53:05.797098456Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 1 00:53:05.797526 env[1305]: time="2025-11-01T00:53:05.797505308Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 1 00:53:05.797617 env[1305]: time="2025-11-01T00:53:05.797600948Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 1 00:53:05.797686 env[1305]: time="2025-11-01T00:53:05.797670727Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 1 00:53:05.797810 env[1305]: time="2025-11-01T00:53:05.797793768Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 1 00:53:05.797895 env[1305]: time="2025-11-01T00:53:05.797879010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 1 00:53:05.797981 env[1305]: time="2025-11-01T00:53:05.797964744Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 1 00:53:05.798052 env[1305]: time="2025-11-01T00:53:05.798036625Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 1 00:53:05.798120 env[1305]: time="2025-11-01T00:53:05.798104332Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 1 00:53:05.798186 env[1305]: time="2025-11-01T00:53:05.798170762Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 1 00:53:05.798254 env[1305]: time="2025-11-01T00:53:05.798238367Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Nov 1 00:53:05.798328 env[1305]: time="2025-11-01T00:53:05.798312478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 1 00:53:05.798404 env[1305]: time="2025-11-01T00:53:05.798388700Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 1 00:53:05.798856 env[1305]: time="2025-11-01T00:53:05.798819488Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 1 00:53:05.798923 env[1305]: time="2025-11-01T00:53:05.798862843Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 1 00:53:05.798923 env[1305]: time="2025-11-01T00:53:05.798885398Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 1 00:53:05.798923 env[1305]: time="2025-11-01T00:53:05.798909062Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 1 00:53:05.799009 env[1305]: time="2025-11-01T00:53:05.798934095Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Nov 1 00:53:05.799009 env[1305]: time="2025-11-01T00:53:05.798954634Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 1 00:53:05.799009 env[1305]: time="2025-11-01T00:53:05.798980611Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Nov 1 00:53:05.799081 env[1305]: time="2025-11-01T00:53:05.799029071Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Nov 1 00:53:05.799461 env[1305]: time="2025-11-01T00:53:05.799328176Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 1 00:53:05.802579 env[1305]: time="2025-11-01T00:53:05.799474126Z" level=info msg="Connect containerd service" Nov 1 00:53:05.802579 env[1305]: time="2025-11-01T00:53:05.799515294Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 1 00:53:05.802579 env[1305]: time="2025-11-01T00:53:05.800270032Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 1 00:53:05.802579 env[1305]: time="2025-11-01T00:53:05.800583285Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 1 00:53:05.802579 env[1305]: time="2025-11-01T00:53:05.800623462Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 1 00:53:05.800794 systemd[1]: Started containerd.service. 
Nov 1 00:53:05.821515 env[1305]: time="2025-11-01T00:53:05.821456692Z" level=info msg="containerd successfully booted in 0.127964s" Nov 1 00:53:05.821515 env[1305]: time="2025-11-01T00:53:05.801422257Z" level=info msg="Start subscribing containerd event" Nov 1 00:53:05.824114 env[1305]: time="2025-11-01T00:53:05.824076866Z" level=info msg="Start recovering state" Nov 1 00:53:05.824208 env[1305]: time="2025-11-01T00:53:05.824192045Z" level=info msg="Start event monitor" Nov 1 00:53:05.824238 env[1305]: time="2025-11-01T00:53:05.824228132Z" level=info msg="Start snapshots syncer" Nov 1 00:53:05.824271 env[1305]: time="2025-11-01T00:53:05.824243243Z" level=info msg="Start cni network conf syncer for default" Nov 1 00:53:05.824271 env[1305]: time="2025-11-01T00:53:05.824255863Z" level=info msg="Start streaming server" Nov 1 00:53:06.242120 sshd_keygen[1302]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 1 00:53:06.276344 systemd[1]: Finished sshd-keygen.service. Nov 1 00:53:06.279324 systemd[1]: Starting issuegen.service... Nov 1 00:53:06.294352 systemd[1]: issuegen.service: Deactivated successfully. Nov 1 00:53:06.294591 systemd[1]: Finished issuegen.service. Nov 1 00:53:06.297044 systemd[1]: Starting systemd-user-sessions.service... Nov 1 00:53:06.311146 systemd[1]: Finished systemd-user-sessions.service. Nov 1 00:53:06.313604 systemd[1]: Started getty@tty1.service. Nov 1 00:53:06.316313 systemd[1]: Started serial-getty@ttyS0.service. Nov 1 00:53:06.317867 systemd[1]: Reached target getty.target. Nov 1 00:53:06.531900 tar[1301]: linux-amd64/README.md Nov 1 00:53:06.540605 systemd[1]: Finished prepare-helm.service. Nov 1 00:53:06.551407 locksmithd[1336]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 1 00:53:07.100132 systemd[1]: Started kubelet.service. Nov 1 00:53:07.101514 systemd[1]: Reached target multi-user.target. Nov 1 00:53:07.104474 systemd[1]: Starting systemd-update-utmp-runlevel.service... 
Nov 1 00:53:07.119915 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Nov 1 00:53:07.120253 systemd[1]: Finished systemd-update-utmp-runlevel.service. Nov 1 00:53:07.128112 systemd[1]: Startup finished in 5.973s (kernel) + 7.402s (userspace) = 13.376s. Nov 1 00:53:07.732316 kubelet[1382]: E1101 00:53:07.732271 1382 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:53:07.734225 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:53:07.734408 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:53:09.168167 systemd[1]: Created slice system-sshd.slice. Nov 1 00:53:09.169923 systemd[1]: Started sshd@0-144.126.212.254:22-139.178.89.65:44476.service. Nov 1 00:53:09.224835 sshd[1390]: Accepted publickey for core from 139.178.89.65 port 44476 ssh2: RSA SHA256:bQOwnZoRZNmgRHdcvbYhT2IlOX5E1Dxtpq66cFKwaFs Nov 1 00:53:09.227967 sshd[1390]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:53:09.238202 systemd[1]: Created slice user-500.slice. Nov 1 00:53:09.239375 systemd[1]: Starting user-runtime-dir@500.service... Nov 1 00:53:09.246885 systemd-logind[1290]: New session 1 of user core. Nov 1 00:53:09.251784 systemd[1]: Finished user-runtime-dir@500.service. Nov 1 00:53:09.253363 systemd[1]: Starting user@500.service... Nov 1 00:53:09.259504 (systemd)[1395]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:53:09.333461 systemd[1395]: Queued start job for default target default.target. Nov 1 00:53:09.334428 systemd[1395]: Reached target paths.target. Nov 1 00:53:09.334575 systemd[1395]: Reached target sockets.target. 
Nov 1 00:53:09.334667 systemd[1395]: Reached target timers.target. Nov 1 00:53:09.334781 systemd[1395]: Reached target basic.target. Nov 1 00:53:09.334999 systemd[1]: Started user@500.service. Nov 1 00:53:09.335995 systemd[1]: Started session-1.scope. Nov 1 00:53:09.337344 systemd[1395]: Reached target default.target. Nov 1 00:53:09.338127 systemd[1395]: Startup finished in 71ms. Nov 1 00:53:09.394731 systemd[1]: Started sshd@1-144.126.212.254:22-139.178.89.65:44492.service. Nov 1 00:53:09.443516 sshd[1404]: Accepted publickey for core from 139.178.89.65 port 44492 ssh2: RSA SHA256:bQOwnZoRZNmgRHdcvbYhT2IlOX5E1Dxtpq66cFKwaFs Nov 1 00:53:09.445989 sshd[1404]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:53:09.450820 systemd-logind[1290]: New session 2 of user core. Nov 1 00:53:09.451631 systemd[1]: Started session-2.scope. Nov 1 00:53:09.512431 sshd[1404]: pam_unix(sshd:session): session closed for user core Nov 1 00:53:09.516229 systemd[1]: sshd@1-144.126.212.254:22-139.178.89.65:44492.service: Deactivated successfully. Nov 1 00:53:09.517162 systemd-logind[1290]: Session 2 logged out. Waiting for processes to exit. Nov 1 00:53:09.519017 systemd[1]: Started sshd@2-144.126.212.254:22-139.178.89.65:44508.service. Nov 1 00:53:09.519934 systemd[1]: session-2.scope: Deactivated successfully. Nov 1 00:53:09.520592 systemd-logind[1290]: Removed session 2. Nov 1 00:53:09.568465 sshd[1411]: Accepted publickey for core from 139.178.89.65 port 44508 ssh2: RSA SHA256:bQOwnZoRZNmgRHdcvbYhT2IlOX5E1Dxtpq66cFKwaFs Nov 1 00:53:09.570299 sshd[1411]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:53:09.575048 systemd[1]: Started session-3.scope. Nov 1 00:53:09.575613 systemd-logind[1290]: New session 3 of user core. Nov 1 00:53:09.633301 sshd[1411]: pam_unix(sshd:session): session closed for user core Nov 1 00:53:09.637425 systemd[1]: sshd@2-144.126.212.254:22-139.178.89.65:44508.service: Deactivated successfully. 
Nov 1 00:53:09.639303 systemd[1]: Started sshd@3-144.126.212.254:22-139.178.89.65:44518.service. Nov 1 00:53:09.639661 systemd[1]: session-3.scope: Deactivated successfully. Nov 1 00:53:09.642046 systemd-logind[1290]: Session 3 logged out. Waiting for processes to exit. Nov 1 00:53:09.643401 systemd-logind[1290]: Removed session 3. Nov 1 00:53:09.694482 sshd[1418]: Accepted publickey for core from 139.178.89.65 port 44518 ssh2: RSA SHA256:bQOwnZoRZNmgRHdcvbYhT2IlOX5E1Dxtpq66cFKwaFs Nov 1 00:53:09.696148 sshd[1418]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:53:09.701026 systemd[1]: Started session-4.scope. Nov 1 00:53:09.701252 systemd-logind[1290]: New session 4 of user core. Nov 1 00:53:09.764780 sshd[1418]: pam_unix(sshd:session): session closed for user core Nov 1 00:53:09.767772 systemd[1]: sshd@3-144.126.212.254:22-139.178.89.65:44518.service: Deactivated successfully. Nov 1 00:53:09.769569 systemd[1]: Started sshd@4-144.126.212.254:22-139.178.89.65:44524.service. Nov 1 00:53:09.771577 systemd[1]: session-4.scope: Deactivated successfully. Nov 1 00:53:09.771976 systemd-logind[1290]: Session 4 logged out. Waiting for processes to exit. Nov 1 00:53:09.773704 systemd-logind[1290]: Removed session 4. Nov 1 00:53:09.820708 sshd[1425]: Accepted publickey for core from 139.178.89.65 port 44524 ssh2: RSA SHA256:bQOwnZoRZNmgRHdcvbYhT2IlOX5E1Dxtpq66cFKwaFs Nov 1 00:53:09.822495 sshd[1425]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:53:09.827299 systemd[1]: Started session-5.scope. Nov 1 00:53:09.827624 systemd-logind[1290]: New session 5 of user core. 
Nov 1 00:53:09.892581 sudo[1429]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 1 00:53:09.893211 sudo[1429]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Nov 1 00:53:09.900923 dbus-daemon[1275]: \xd0\xfd\xf4o\xe0U: received setenforce notice (enforcing=805790528) Nov 1 00:53:09.902673 sudo[1429]: pam_unix(sudo:session): session closed for user root Nov 1 00:53:09.907968 sshd[1425]: pam_unix(sshd:session): session closed for user core Nov 1 00:53:09.911744 systemd[1]: Started sshd@5-144.126.212.254:22-139.178.89.65:44530.service. Nov 1 00:53:09.913268 systemd[1]: sshd@4-144.126.212.254:22-139.178.89.65:44524.service: Deactivated successfully. Nov 1 00:53:09.914717 systemd[1]: session-5.scope: Deactivated successfully. Nov 1 00:53:09.914981 systemd-logind[1290]: Session 5 logged out. Waiting for processes to exit. Nov 1 00:53:09.917411 systemd-logind[1290]: Removed session 5. Nov 1 00:53:09.963836 sshd[1431]: Accepted publickey for core from 139.178.89.65 port 44530 ssh2: RSA SHA256:bQOwnZoRZNmgRHdcvbYhT2IlOX5E1Dxtpq66cFKwaFs Nov 1 00:53:09.965550 sshd[1431]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:53:09.970153 systemd[1]: Started session-6.scope. Nov 1 00:53:09.970822 systemd-logind[1290]: New session 6 of user core. Nov 1 00:53:10.030440 sudo[1438]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 1 00:53:10.031068 sudo[1438]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Nov 1 00:53:10.034356 sudo[1438]: pam_unix(sudo:session): session closed for user root Nov 1 00:53:10.039672 sudo[1437]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 1 00:53:10.039953 sudo[1437]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Nov 1 00:53:10.049806 systemd[1]: Stopping audit-rules.service... 
Nov 1 00:53:10.049000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Nov 1 00:53:10.051933 auditctl[1441]: No rules Nov 1 00:53:10.052965 kernel: kauditd_printk_skb: 34 callbacks suppressed Nov 1 00:53:10.053016 kernel: audit: type=1305 audit(1761958390.049:173): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Nov 1 00:53:10.054018 systemd[1]: audit-rules.service: Deactivated successfully. Nov 1 00:53:10.054221 systemd[1]: Stopped audit-rules.service. Nov 1 00:53:10.057345 systemd[1]: Starting audit-rules.service... Nov 1 00:53:10.049000 audit[1441]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffef8655bb0 a2=420 a3=0 items=0 ppid=1 pid=1441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:10.072772 kernel: audit: type=1300 audit(1761958390.049:173): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffef8655bb0 a2=420 a3=0 items=0 ppid=1 pid=1441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:10.072831 kernel: audit: type=1327 audit(1761958390.049:173): proctitle=2F7362696E2F617564697463746C002D44 Nov 1 00:53:10.049000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Nov 1 00:53:10.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:53:10.079781 kernel: audit: type=1131 audit(1761958390.052:174): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:10.091855 augenrules[1459]: No rules Nov 1 00:53:10.092745 systemd[1]: Finished audit-rules.service. Nov 1 00:53:10.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:10.099335 sudo[1437]: pam_unix(sudo:session): session closed for user root Nov 1 00:53:10.099764 kernel: audit: type=1130 audit(1761958390.092:175): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:10.098000 audit[1437]: USER_END pid=1437 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 00:53:10.098000 audit[1437]: CRED_DISP pid=1437 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 00:53:10.113413 kernel: audit: type=1106 audit(1761958390.098:176): pid=1437 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Nov 1 00:53:10.113470 kernel: audit: type=1104 audit(1761958390.098:177): pid=1437 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 00:53:10.113600 sshd[1431]: pam_unix(sshd:session): session closed for user core Nov 1 00:53:10.116543 systemd[1]: Started sshd@6-144.126.212.254:22-139.178.89.65:44546.service. Nov 1 00:53:10.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-144.126.212.254:22-139.178.89.65:44546 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:10.126997 kernel: audit: type=1130 audit(1761958390.114:178): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-144.126.212.254:22-139.178.89.65:44546 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:10.127037 systemd[1]: sshd@5-144.126.212.254:22-139.178.89.65:44530.service: Deactivated successfully. Nov 1 00:53:10.124000 audit[1431]: USER_END pid=1431 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:53:10.132216 systemd[1]: session-6.scope: Deactivated successfully. Nov 1 00:53:10.136157 systemd-logind[1290]: Session 6 logged out. Waiting for processes to exit. 
Nov 1 00:53:10.136800 kernel: audit: type=1106 audit(1761958390.124:179): pid=1431 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:53:10.137058 systemd-logind[1290]: Removed session 6. Nov 1 00:53:10.124000 audit[1431]: CRED_DISP pid=1431 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:53:10.126000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-144.126.212.254:22-139.178.89.65:44530 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:10.144840 kernel: audit: type=1104 audit(1761958390.124:180): pid=1431 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:53:10.167000 audit[1464]: USER_ACCT pid=1464 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:53:10.168834 sshd[1464]: Accepted publickey for core from 139.178.89.65 port 44546 ssh2: RSA SHA256:bQOwnZoRZNmgRHdcvbYhT2IlOX5E1Dxtpq66cFKwaFs Nov 1 00:53:10.169000 audit[1464]: CRED_ACQ pid=1464 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh 
res=success' Nov 1 00:53:10.169000 audit[1464]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd1c91cb90 a2=3 a3=0 items=0 ppid=1 pid=1464 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:10.169000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:53:10.170846 sshd[1464]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:53:10.175799 systemd[1]: Started session-7.scope. Nov 1 00:53:10.176141 systemd-logind[1290]: New session 7 of user core. Nov 1 00:53:10.179000 audit[1464]: USER_START pid=1464 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:53:10.180000 audit[1469]: CRED_ACQ pid=1469 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:53:10.232000 audit[1470]: USER_ACCT pid=1470 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 00:53:10.234143 sudo[1470]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 1 00:53:10.234514 sudo[1470]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Nov 1 00:53:10.233000 audit[1470]: CRED_REFR pid=1470 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Nov 1 00:53:10.235000 audit[1470]: USER_START pid=1470 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 00:53:10.259499 systemd[1]: Starting docker.service... Nov 1 00:53:10.305200 env[1480]: time="2025-11-01T00:53:10.305128584Z" level=info msg="Starting up" Nov 1 00:53:10.306435 env[1480]: time="2025-11-01T00:53:10.306400910Z" level=info msg="parsed scheme: \"unix\"" module=grpc Nov 1 00:53:10.306435 env[1480]: time="2025-11-01T00:53:10.306426086Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Nov 1 00:53:10.306517 env[1480]: time="2025-11-01T00:53:10.306445101Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Nov 1 00:53:10.306517 env[1480]: time="2025-11-01T00:53:10.306455824Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Nov 1 00:53:10.308468 env[1480]: time="2025-11-01T00:53:10.308440975Z" level=info msg="parsed scheme: \"unix\"" module=grpc Nov 1 00:53:10.308468 env[1480]: time="2025-11-01T00:53:10.308462402Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Nov 1 00:53:10.308542 env[1480]: time="2025-11-01T00:53:10.308477241Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Nov 1 00:53:10.308542 env[1480]: time="2025-11-01T00:53:10.308485693Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Nov 1 00:53:10.315027 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3849599619-merged.mount: Deactivated successfully. 
Nov 1 00:53:10.360101 env[1480]: time="2025-11-01T00:53:10.360051314Z" level=warning msg="Your kernel does not support cgroup blkio weight" Nov 1 00:53:10.360101 env[1480]: time="2025-11-01T00:53:10.360079862Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Nov 1 00:53:10.360341 env[1480]: time="2025-11-01T00:53:10.360300245Z" level=info msg="Loading containers: start." Nov 1 00:53:10.434000 audit[1510]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1510 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:53:10.434000 audit[1510]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffd1412a270 a2=0 a3=7ffd1412a25c items=0 ppid=1480 pid=1510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:10.434000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Nov 1 00:53:10.437000 audit[1512]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1512 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:53:10.437000 audit[1512]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffe563b3f40 a2=0 a3=7ffe563b3f2c items=0 ppid=1480 pid=1512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:10.437000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Nov 1 00:53:10.439000 audit[1514]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1514 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:53:10.439000 audit[1514]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffff1797da0 a2=0 
a3=7ffff1797d8c items=0 ppid=1480 pid=1514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:10.439000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Nov 1 00:53:10.441000 audit[1516]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1516 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:53:10.441000 audit[1516]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd816cd5d0 a2=0 a3=7ffd816cd5bc items=0 ppid=1480 pid=1516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:10.441000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Nov 1 00:53:10.444000 audit[1518]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1518 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:53:10.444000 audit[1518]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc051356f0 a2=0 a3=7ffc051356dc items=0 ppid=1480 pid=1518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:10.444000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Nov 1 00:53:10.466000 audit[1523]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1523 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:53:10.466000 audit[1523]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff0a9127a0 a2=0 a3=7fff0a91278c items=0 ppid=1480 pid=1523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:10.466000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Nov 1 00:53:10.472000 audit[1525]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1525 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:53:10.472000 audit[1525]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe58db1510 a2=0 a3=7ffe58db14fc items=0 ppid=1480 pid=1525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:10.472000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Nov 1 00:53:10.474000 audit[1527]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1527 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:53:10.474000 audit[1527]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffce62590b0 a2=0 a3=7ffce625909c items=0 ppid=1480 pid=1527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:10.474000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Nov 1 00:53:10.477000 audit[1529]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1529 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 
00:53:10.477000 audit[1529]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffc73bd6e00 a2=0 a3=7ffc73bd6dec items=0 ppid=1480 pid=1529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:10.477000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Nov 1 00:53:10.483000 audit[1533]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1533 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:53:10.483000 audit[1533]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffeac8be340 a2=0 a3=7ffeac8be32c items=0 ppid=1480 pid=1533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:10.483000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Nov 1 00:53:10.487000 audit[1534]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1534 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:53:10.487000 audit[1534]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fff5696cbb0 a2=0 a3=7fff5696cb9c items=0 ppid=1480 pid=1534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:10.487000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Nov 1 00:53:10.500780 kernel: Initializing XFRM netlink socket Nov 1 00:53:10.541960 env[1480]: time="2025-11-01T00:53:10.541924312Z" level=info msg="Default 
bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Nov 1 00:53:10.575000 audit[1542]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1542 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:53:10.575000 audit[1542]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffd332cb6a0 a2=0 a3=7ffd332cb68c items=0 ppid=1480 pid=1542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:10.575000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Nov 1 00:53:10.587000 audit[1546]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1546 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:53:10.587000 audit[1546]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffe93e8edb0 a2=0 a3=7ffe93e8ed9c items=0 ppid=1480 pid=1546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:10.587000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Nov 1 00:53:10.591000 audit[1549]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1549 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:53:10.591000 audit[1549]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffebfcb5f20 a2=0 a3=7ffebfcb5f0c items=0 ppid=1480 pid=1549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:10.591000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Nov 1 00:53:10.593000 audit[1551]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1551 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:53:10.593000 audit[1551]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffc878eca20 a2=0 a3=7ffc878eca0c items=0 ppid=1480 pid=1551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:10.593000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Nov 1 00:53:10.597000 audit[1553]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1553 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:53:10.597000 audit[1553]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffc3ad80590 a2=0 a3=7ffc3ad8057c items=0 ppid=1480 pid=1553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:10.597000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Nov 1 00:53:10.600000 audit[1555]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1555 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:53:10.600000 audit[1555]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffc7de08ec0 a2=0 
a3=7ffc7de08eac items=0 ppid=1480 pid=1555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:10.600000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Nov 1 00:53:10.603000 audit[1557]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1557 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:53:10.603000 audit[1557]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffe7d0233e0 a2=0 a3=7ffe7d0233cc items=0 ppid=1480 pid=1557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:10.603000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Nov 1 00:53:10.612000 audit[1560]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1560 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:53:10.612000 audit[1560]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7fffe334ca50 a2=0 a3=7fffe334ca3c items=0 ppid=1480 pid=1560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:10.612000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Nov 1 00:53:10.615000 audit[1562]: NETFILTER_CFG 
table=filter:21 family=2 entries=1 op=nft_register_rule pid=1562 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:53:10.615000 audit[1562]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7fff4e758770 a2=0 a3=7fff4e75875c items=0 ppid=1480 pid=1562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:10.615000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Nov 1 00:53:10.618000 audit[1564]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1564 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:53:10.618000 audit[1564]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffefe422230 a2=0 a3=7ffefe42221c items=0 ppid=1480 pid=1564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:10.618000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Nov 1 00:53:10.620000 audit[1566]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1566 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:53:10.620000 audit[1566]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff5ec9ad20 a2=0 a3=7fff5ec9ad0c items=0 ppid=1480 pid=1566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:10.620000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Nov 1 00:53:10.621686 systemd-networkd[1058]: docker0: Link UP Nov 1 00:53:10.629000 audit[1570]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1570 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:53:10.629000 audit[1570]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd9f2a1570 a2=0 a3=7ffd9f2a155c items=0 ppid=1480 pid=1570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:10.629000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Nov 1 00:53:10.634000 audit[1571]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1571 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:53:10.634000 audit[1571]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fff9cdc6ce0 a2=0 a3=7fff9cdc6ccc items=0 ppid=1480 pid=1571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:10.634000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Nov 1 00:53:10.635348 env[1480]: time="2025-11-01T00:53:10.635318127Z" level=info msg="Loading containers: done." 
Nov 1 00:53:10.654539 env[1480]: time="2025-11-01T00:53:10.654500659Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 1 00:53:10.654945 env[1480]: time="2025-11-01T00:53:10.654919228Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Nov 1 00:53:10.655125 env[1480]: time="2025-11-01T00:53:10.655109193Z" level=info msg="Daemon has completed initialization" Nov 1 00:53:10.668222 systemd[1]: Started docker.service. Nov 1 00:53:10.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:10.672610 env[1480]: time="2025-11-01T00:53:10.672548022Z" level=info msg="API listen on /run/docker.sock" Nov 1 00:53:10.695283 systemd[1]: Starting coreos-metadata.service... Nov 1 00:53:10.737019 coreos-metadata[1597]: Nov 01 00:53:10.736 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 1 00:53:10.750074 coreos-metadata[1597]: Nov 01 00:53:10.749 INFO Fetch successful Nov 1 00:53:10.762972 systemd[1]: Finished coreos-metadata.service. Nov 1 00:53:10.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:11.684809 env[1305]: time="2025-11-01T00:53:11.684768537Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Nov 1 00:53:12.219407 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1891148504.mount: Deactivated successfully. 
Nov 1 00:53:13.952682 env[1305]: time="2025-11-01T00:53:13.952621528Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:53:13.954236 env[1305]: time="2025-11-01T00:53:13.954198239Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:53:13.956036 env[1305]: time="2025-11-01T00:53:13.956005024Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:53:13.957804 env[1305]: time="2025-11-01T00:53:13.957774181Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:53:13.958864 env[1305]: time="2025-11-01T00:53:13.958833332Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Nov 1 00:53:13.959515 env[1305]: time="2025-11-01T00:53:13.959490103Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Nov 1 00:53:15.723478 env[1305]: time="2025-11-01T00:53:15.723418086Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:53:15.727348 env[1305]: time="2025-11-01T00:53:15.727299435Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" 
Nov 1 00:53:15.730641 env[1305]: time="2025-11-01T00:53:15.730605469Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:53:15.734507 env[1305]: time="2025-11-01T00:53:15.734465630Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:53:15.736093 env[1305]: time="2025-11-01T00:53:15.736040329Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Nov 1 00:53:15.737068 env[1305]: time="2025-11-01T00:53:15.737037310Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Nov 1 00:53:17.356926 env[1305]: time="2025-11-01T00:53:17.356877302Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:53:17.359143 env[1305]: time="2025-11-01T00:53:17.359104852Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:53:17.360814 env[1305]: time="2025-11-01T00:53:17.360787197Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:53:17.362469 env[1305]: time="2025-11-01T00:53:17.362440688Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:53:17.363296 env[1305]: time="2025-11-01T00:53:17.363269463Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Nov 1 00:53:17.364453 env[1305]: time="2025-11-01T00:53:17.364423732Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Nov 1 00:53:17.995924 kernel: kauditd_printk_skb: 85 callbacks suppressed Nov 1 00:53:17.996046 kernel: audit: type=1130 audit(1761958397.984:216): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:17.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:17.985343 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 1 00:53:17.985553 systemd[1]: Stopped kubelet.service. Nov 1 00:53:17.987463 systemd[1]: Starting kubelet.service... Nov 1 00:53:18.005460 kernel: audit: type=1131 audit(1761958397.984:217): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:17.984000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:53:18.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:18.125588 systemd[1]: Started kubelet.service. Nov 1 00:53:18.133004 kernel: audit: type=1130 audit(1761958398.125:218): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:18.230240 kubelet[1627]: E1101 00:53:18.230185 1627 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:53:18.232000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Nov 1 00:53:18.233471 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:53:18.233636 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:53:18.240878 kernel: audit: type=1131 audit(1761958398.232:219): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Nov 1 00:53:18.739137 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount88596899.mount: Deactivated successfully. 
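The kubelet exit above (status=1, restart counter climbing) is the usual pre-bootstrap state: /var/lib/kubelet/config.yaml does not exist until kubeadm writes it during init/join. A sketch, assuming one wanted to pull the missing path out of the run.go error programmatically (the sample `line` is abridged from the journal entry above):

```python
import re

# Abridged copy of the kubelet "command failed" error from the journal.
line = (
    'E1101 00:53:18.230185 1627 run.go:72] "command failed" '
    'err="failed to load kubelet config file, '
    'path: /var/lib/kubelet/config.yaml, error: ..."'
)

# Extract the config path the kubelet tried (and failed) to read.
m = re.search(r"path: (\S+?),", line)
print(m.group(1) if m else "no config path found")
# /var/lib/kubelet/config.yaml
```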
Nov 1 00:53:19.642392 env[1305]: time="2025-11-01T00:53:19.642338791Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:53:19.643683 env[1305]: time="2025-11-01T00:53:19.643650003Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:53:19.645774 env[1305]: time="2025-11-01T00:53:19.645729816Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:53:19.647358 env[1305]: time="2025-11-01T00:53:19.647326777Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:53:19.647980 env[1305]: time="2025-11-01T00:53:19.647941448Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Nov 1 00:53:19.649018 env[1305]: time="2025-11-01T00:53:19.648986222Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Nov 1 00:53:20.215041 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1817579946.mount: Deactivated successfully. 
Nov 1 00:53:21.335664 env[1305]: time="2025-11-01T00:53:21.335594386Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:53:21.337102 env[1305]: time="2025-11-01T00:53:21.337063537Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:53:21.338899 env[1305]: time="2025-11-01T00:53:21.338870570Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:53:21.340447 env[1305]: time="2025-11-01T00:53:21.340418015Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:53:21.341324 env[1305]: time="2025-11-01T00:53:21.341290885Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Nov 1 00:53:21.341948 env[1305]: time="2025-11-01T00:53:21.341924145Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 1 00:53:21.756922 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2617516325.mount: Deactivated successfully. 
Nov 1 00:53:21.764455 env[1305]: time="2025-11-01T00:53:21.764413942Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:53:21.766678 env[1305]: time="2025-11-01T00:53:21.766641176Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:53:21.767951 env[1305]: time="2025-11-01T00:53:21.767923235Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:53:21.769061 env[1305]: time="2025-11-01T00:53:21.769026126Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:53:21.769682 env[1305]: time="2025-11-01T00:53:21.769650477Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 1 00:53:21.770369 env[1305]: time="2025-11-01T00:53:21.770344841Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Nov 1 00:53:22.338367 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2755305656.mount: Deactivated successfully. 
Nov 1 00:53:25.003370 env[1305]: time="2025-11-01T00:53:25.003315705Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:53:25.005503 env[1305]: time="2025-11-01T00:53:25.005451735Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:53:25.007272 env[1305]: time="2025-11-01T00:53:25.007239570Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:53:25.009150 env[1305]: time="2025-11-01T00:53:25.009113950Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:53:25.010829 env[1305]: time="2025-11-01T00:53:25.010795076Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Nov 1 00:53:28.479919 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 1 00:53:28.486773 kernel: audit: type=1130 audit(1761958408.479:220): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:28.486883 kernel: audit: type=1131 audit(1761958408.481:221): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:53:28.479000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:28.481000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:28.480572 systemd[1]: Stopped kubelet.service. Nov 1 00:53:28.487678 systemd[1]: Starting kubelet.service... Nov 1 00:53:28.505661 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 1 00:53:28.505785 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 1 00:53:28.506062 systemd[1]: Stopped kubelet.service. Nov 1 00:53:28.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Nov 1 00:53:28.512780 kernel: audit: type=1130 audit(1761958408.505:222): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Nov 1 00:53:28.513154 systemd[1]: Starting kubelet.service... Nov 1 00:53:28.543736 systemd[1]: Reloading. 
Nov 1 00:53:28.657477 /usr/lib/systemd/system-generators/torcx-generator[1684]: time="2025-11-01T00:53:28Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 00:53:28.660830 /usr/lib/systemd/system-generators/torcx-generator[1684]: time="2025-11-01T00:53:28Z" level=info msg="torcx already run" Nov 1 00:53:28.757470 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 00:53:28.757496 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 00:53:28.778354 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:53:28.864401 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 1 00:53:28.864635 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 1 00:53:28.865015 systemd[1]: Stopped kubelet.service. Nov 1 00:53:28.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Nov 1 00:53:28.866722 systemd[1]: Starting kubelet.service... Nov 1 00:53:28.871806 kernel: audit: type=1130 audit(1761958408.864:223): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Nov 1 00:53:28.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:28.980576 systemd[1]: Started kubelet.service. Nov 1 00:53:28.987925 kernel: audit: type=1130 audit(1761958408.979:224): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:29.065070 kubelet[1750]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:53:29.065070 kubelet[1750]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 00:53:29.065070 kubelet[1750]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 1 00:53:29.065070 kubelet[1750]: I1101 00:53:29.064705 1750 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:53:29.326214 kubelet[1750]: I1101 00:53:29.325813 1750 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 1 00:53:29.326214 kubelet[1750]: I1101 00:53:29.325844 1750 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:53:29.326214 kubelet[1750]: I1101 00:53:29.326113 1750 server.go:954] "Client rotation is on, will bootstrap in background" Nov 1 00:53:29.349449 kubelet[1750]: E1101 00:53:29.349406 1750 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://144.126.212.254:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 144.126.212.254:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:53:29.350875 kubelet[1750]: I1101 00:53:29.350847 1750 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:53:29.360483 kubelet[1750]: E1101 00:53:29.360453 1750 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 00:53:29.360483 kubelet[1750]: I1101 00:53:29.360480 1750 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 1 00:53:29.363532 kubelet[1750]: I1101 00:53:29.363508 1750 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 1 00:53:29.365135 kubelet[1750]: I1101 00:53:29.365088 1750 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:53:29.365310 kubelet[1750]: I1101 00:53:29.365134 1750 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-n-0efaf8214b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Nov 1 00:53:29.365426 kubelet[1750]: I1101 00:53:29.365314 1750 topology_manager.go:138] "Creating topology manager with 
none policy" Nov 1 00:53:29.365426 kubelet[1750]: I1101 00:53:29.365324 1750 container_manager_linux.go:304] "Creating device plugin manager" Nov 1 00:53:29.365488 kubelet[1750]: I1101 00:53:29.365435 1750 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:53:29.368572 kubelet[1750]: I1101 00:53:29.368548 1750 kubelet.go:446] "Attempting to sync node with API server" Nov 1 00:53:29.368572 kubelet[1750]: I1101 00:53:29.368576 1750 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:53:29.368684 kubelet[1750]: I1101 00:53:29.368594 1750 kubelet.go:352] "Adding apiserver pod source" Nov 1 00:53:29.368684 kubelet[1750]: I1101 00:53:29.368607 1750 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:53:29.378269 kubelet[1750]: I1101 00:53:29.378242 1750 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Nov 1 00:53:29.378647 kubelet[1750]: I1101 00:53:29.378622 1750 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 1 00:53:29.379238 kubelet[1750]: W1101 00:53:29.379214 1750 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
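The container_manager_linux.go entry above embeds the full nodeConfig as JSON inside the log message. Once extracted, it parses cleanly; a minimal sketch using a hand-trimmed copy of that payload (only a subset of its fields is reproduced here):

```python
import json

# Subset of the nodeConfig JSON embedded in the log line above.
node_config = json.loads("""
{
  "CgroupDriver": "cgroupfs",
  "HardEvictionThresholds": [
    {"Signal": "nodefs.available", "Operator": "LessThan",
     "Value": {"Quantity": null, "Percentage": 0.1}},
    {"Signal": "memory.available", "Operator": "LessThan",
     "Value": {"Quantity": "100Mi", "Percentage": 0}}
  ]
}
""")

# Render each hard-eviction threshold in human-readable form.
for t in node_config["HardEvictionThresholds"]:
    v = t["Value"]
    limit = v["Quantity"] if v["Quantity"] else f"{v['Percentage']:.0%}"
    print(f"{t['Signal']} {t['Operator']} {limit}")
```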
Nov 1 00:53:29.383187 kubelet[1750]: I1101 00:53:29.383153 1750 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 00:53:29.383269 kubelet[1750]: I1101 00:53:29.383205 1750 server.go:1287] "Started kubelet" Nov 1 00:53:29.383972 kubelet[1750]: W1101 00:53:29.383345 1750 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://144.126.212.254:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-0efaf8214b&limit=500&resourceVersion=0": dial tcp 144.126.212.254:6443: connect: connection refused Nov 1 00:53:29.383972 kubelet[1750]: E1101 00:53:29.383410 1750 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://144.126.212.254:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-0efaf8214b&limit=500&resourceVersion=0\": dial tcp 144.126.212.254:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:53:29.383972 kubelet[1750]: W1101 00:53:29.383480 1750 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://144.126.212.254:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 144.126.212.254:6443: connect: connection refused Nov 1 00:53:29.383972 kubelet[1750]: E1101 00:53:29.383507 1750 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://144.126.212.254:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 144.126.212.254:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:53:29.404000 audit[1750]: AVC avc: denied { mac_admin } for pid=1750 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:53:29.406875 kubelet[1750]: I1101 00:53:29.406843 1750 
kubelet.go:1507] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins_registry: invalid argument" Nov 1 00:53:29.407066 kubelet[1750]: I1101 00:53:29.407048 1750 kubelet.go:1511] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins: invalid argument" Nov 1 00:53:29.407263 kubelet[1750]: I1101 00:53:29.407247 1750 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:53:29.404000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 00:53:29.414451 kernel: audit: type=1400 audit(1761958409.404:225): avc: denied { mac_admin } for pid=1750 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:53:29.414558 kernel: audit: type=1401 audit(1761958409.404:225): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 00:53:29.404000 audit[1750]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0009d3e60 a1=c0005fdf08 a2=c0009d3e30 a3=25 items=0 ppid=1 pid=1750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:29.404000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 00:53:29.431818 kernel: audit: type=1300 audit(1761958409.404:225): arch=c000003e syscall=188 success=no exit=-22 
a0=c0009d3e60 a1=c0005fdf08 a2=c0009d3e30 a3=25 items=0 ppid=1 pid=1750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:29.431948 kernel: audit: type=1327 audit(1761958409.404:225): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 00:53:29.432002 kernel: audit: type=1400 audit(1761958409.406:226): avc: denied { mac_admin } for pid=1750 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:53:29.406000 audit[1750]: AVC avc: denied { mac_admin } for pid=1750 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:53:29.406000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 00:53:29.406000 audit[1750]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000a72540 a1=c0005fdf20 a2=c0009d3ef0 a3=25 items=0 ppid=1 pid=1750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:29.406000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 00:53:29.438915 kubelet[1750]: I1101 00:53:29.438851 1750 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:53:29.440849 kubelet[1750]: I1101 
00:53:29.440819 1750 server.go:479] "Adding debug handlers to kubelet server" Nov 1 00:53:29.441000 audit[1761]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1761 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:53:29.441000 audit[1761]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd21e16ef0 a2=0 a3=7ffd21e16edc items=0 ppid=1750 pid=1761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:29.441000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Nov 1 00:53:29.442000 audit[1762]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1762 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:53:29.442000 audit[1762]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc8fd27d10 a2=0 a3=7ffc8fd27cfc items=0 ppid=1750 pid=1762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:29.442000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Nov 1 00:53:29.445225 kubelet[1750]: I1101 00:53:29.445164 1750 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:53:29.445427 kubelet[1750]: I1101 00:53:29.445407 1750 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:53:29.445812 kubelet[1750]: I1101 00:53:29.445790 1750 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 
00:53:29.445000 audit[1764]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1764 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:53:29.445000 audit[1764]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc32ce0fb0 a2=0 a3=7ffc32ce0f9c items=0 ppid=1750 pid=1764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:29.445000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Nov 1 00:53:29.447690 kubelet[1750]: I1101 00:53:29.447663 1750 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 00:53:29.447982 kubelet[1750]: E1101 00:53:29.447951 1750 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-0efaf8214b\" not found" Nov 1 00:53:29.448502 kubelet[1750]: I1101 00:53:29.448481 1750 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 00:53:29.448571 kubelet[1750]: I1101 00:53:29.448546 1750 reconciler.go:26] "Reconciler: start to sync state" Nov 1 00:53:29.448938 kubelet[1750]: E1101 00:53:29.417287 1750 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://144.126.212.254:6443/api/v1/namespaces/default/events\": dial tcp 144.126.212.254:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.8-n-0efaf8214b.1873bbd68c4bd9c1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.8-n-0efaf8214b,UID:ci-3510.3.8-n-0efaf8214b,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.8-n-0efaf8214b,},FirstTimestamp:2025-11-01 00:53:29.383172545 +0000 UTC 
m=+0.397702569,LastTimestamp:2025-11-01 00:53:29.383172545 +0000 UTC m=+0.397702569,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.8-n-0efaf8214b,}" Nov 1 00:53:29.450388 kubelet[1750]: W1101 00:53:29.450338 1750 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://144.126.212.254:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 144.126.212.254:6443: connect: connection refused Nov 1 00:53:29.450478 kubelet[1750]: E1101 00:53:29.450400 1750 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://144.126.212.254:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 144.126.212.254:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:53:29.450517 kubelet[1750]: E1101 00:53:29.450468 1750 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://144.126.212.254:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-0efaf8214b?timeout=10s\": dial tcp 144.126.212.254:6443: connect: connection refused" interval="200ms" Nov 1 00:53:29.450860 kubelet[1750]: I1101 00:53:29.450837 1750 factory.go:221] Registration of the systemd container factory successfully Nov 1 00:53:29.450945 kubelet[1750]: I1101 00:53:29.450914 1750 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:53:29.451719 kubelet[1750]: E1101 00:53:29.451659 1750 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 00:53:29.451993 kubelet[1750]: I1101 00:53:29.451971 1750 factory.go:221] Registration of the containerd container factory successfully Nov 1 00:53:29.451000 audit[1766]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1766 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:53:29.451000 audit[1766]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd71a32110 a2=0 a3=7ffd71a320fc items=0 ppid=1750 pid=1766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:29.451000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Nov 1 00:53:29.465000 audit[1769]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1769 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:53:29.465000 audit[1769]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffc55edf4d0 a2=0 a3=7ffc55edf4bc items=0 ppid=1750 pid=1769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:29.465000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Nov 1 00:53:29.466590 kubelet[1750]: I1101 00:53:29.466548 1750 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Nov 1 00:53:29.467000 audit[1771]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=1771 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:53:29.467000 audit[1771]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fffdf68d5d0 a2=0 a3=7fffdf68d5bc items=0 ppid=1750 pid=1771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:29.467000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Nov 1 00:53:29.468295 kubelet[1750]: I1101 00:53:29.468274 1750 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 1 00:53:29.468375 kubelet[1750]: I1101 00:53:29.468363 1750 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 1 00:53:29.468469 kubelet[1750]: I1101 00:53:29.468455 1750 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 1 00:53:29.468531 kubelet[1750]: I1101 00:53:29.468519 1750 kubelet.go:2382] "Starting kubelet main sync loop" Nov 1 00:53:29.468649 kubelet[1750]: E1101 00:53:29.468630 1750 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:53:29.469000 audit[1772]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=1772 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:53:29.469000 audit[1772]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff5a52e3c0 a2=0 a3=7fff5a52e3ac items=0 ppid=1750 pid=1772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:29.469000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Nov 1 00:53:29.470000 audit[1773]: NETFILTER_CFG table=nat:33 family=2 entries=1 op=nft_register_chain pid=1773 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:53:29.470000 audit[1773]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff300484b0 a2=0 a3=7fff3004849c items=0 ppid=1750 pid=1773 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:29.470000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Nov 1 00:53:29.471000 audit[1774]: NETFILTER_CFG table=filter:34 family=2 entries=1 op=nft_register_chain pid=1774 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:53:29.471000 audit[1774]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe179ce640 a2=0 a3=7ffe179ce62c 
items=0 ppid=1750 pid=1774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:29.471000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Nov 1 00:53:29.472000 audit[1775]: NETFILTER_CFG table=mangle:35 family=10 entries=1 op=nft_register_chain pid=1775 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:53:29.472000 audit[1775]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffeb62d4060 a2=0 a3=7ffeb62d404c items=0 ppid=1750 pid=1775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:29.472000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Nov 1 00:53:29.474000 audit[1776]: NETFILTER_CFG table=nat:36 family=10 entries=2 op=nft_register_chain pid=1776 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:53:29.474000 audit[1776]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffde1c9f4c0 a2=0 a3=7ffde1c9f4ac items=0 ppid=1750 pid=1776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:29.474000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Nov 1 00:53:29.475000 audit[1777]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=1777 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:53:29.475000 audit[1777]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=136 a0=3 a1=7fff92efcda0 a2=0 a3=7fff92efcd8c items=0 ppid=1750 pid=1777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:29.475000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Nov 1 00:53:29.478740 kubelet[1750]: W1101 00:53:29.478450 1750 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://144.126.212.254:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 144.126.212.254:6443: connect: connection refused Nov 1 00:53:29.478740 kubelet[1750]: E1101 00:53:29.478521 1750 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://144.126.212.254:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 144.126.212.254:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:53:29.483115 kubelet[1750]: I1101 00:53:29.483081 1750 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:53:29.483115 kubelet[1750]: I1101 00:53:29.483100 1750 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:53:29.483115 kubelet[1750]: I1101 00:53:29.483116 1750 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:53:29.502890 kubelet[1750]: I1101 00:53:29.502839 1750 policy_none.go:49] "None policy: Start" Nov 1 00:53:29.502890 kubelet[1750]: I1101 00:53:29.502885 1750 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 00:53:29.502890 kubelet[1750]: I1101 00:53:29.502902 1750 state_mem.go:35] "Initializing new in-memory state store" Nov 1 00:53:29.510250 kubelet[1750]: I1101 00:53:29.510216 1750 manager.go:519] "Failed to read data from checkpoint" 
checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 1 00:53:29.509000 audit[1750]: AVC avc: denied { mac_admin } for pid=1750 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:53:29.509000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 00:53:29.509000 audit[1750]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000e5eed0 a1=c000e52d68 a2=c000e5eea0 a3=25 items=0 ppid=1 pid=1750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:29.509000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 00:53:29.510588 kubelet[1750]: I1101 00:53:29.510312 1750 server.go:94] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/device-plugins/: invalid argument" Nov 1 00:53:29.510588 kubelet[1750]: I1101 00:53:29.510424 1750 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:53:29.510588 kubelet[1750]: I1101 00:53:29.510436 1750 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:53:29.511408 kubelet[1750]: I1101 00:53:29.511386 1750 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:53:29.512016 kubelet[1750]: E1101 00:53:29.511994 1750 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 1 00:53:29.512106 kubelet[1750]: E1101 00:53:29.512033 1750 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.8-n-0efaf8214b\" not found" Nov 1 00:53:29.573890 kubelet[1750]: E1101 00:53:29.573856 1750 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-0efaf8214b\" not found" node="ci-3510.3.8-n-0efaf8214b" Nov 1 00:53:29.578802 kubelet[1750]: E1101 00:53:29.576632 1750 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-0efaf8214b\" not found" node="ci-3510.3.8-n-0efaf8214b" Nov 1 00:53:29.579009 kubelet[1750]: E1101 00:53:29.578989 1750 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-0efaf8214b\" not found" node="ci-3510.3.8-n-0efaf8214b" Nov 1 00:53:29.611906 kubelet[1750]: I1101 00:53:29.611871 1750 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-0efaf8214b" Nov 1 00:53:29.612306 kubelet[1750]: E1101 00:53:29.612274 1750 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://144.126.212.254:6443/api/v1/nodes\": dial tcp 144.126.212.254:6443: connect: connection refused" node="ci-3510.3.8-n-0efaf8214b" Nov 1 00:53:29.649861 kubelet[1750]: I1101 00:53:29.649811 1750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d5206b48dfbc668a622443f679b5b5d1-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-0efaf8214b\" (UID: \"d5206b48dfbc668a622443f679b5b5d1\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-0efaf8214b" Nov 1 00:53:29.649861 kubelet[1750]: I1101 00:53:29.649856 1750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9ca7c5f932118657f12430d70fb881c8-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-0efaf8214b\" (UID: \"9ca7c5f932118657f12430d70fb881c8\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-0efaf8214b" Nov 1 00:53:29.650071 kubelet[1750]: I1101 00:53:29.649886 1750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9ca7c5f932118657f12430d70fb881c8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-0efaf8214b\" (UID: \"9ca7c5f932118657f12430d70fb881c8\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-0efaf8214b" Nov 1 00:53:29.650071 kubelet[1750]: I1101 00:53:29.649916 1750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a1cf501193079699a0d2938452a031d9-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-0efaf8214b\" (UID: \"a1cf501193079699a0d2938452a031d9\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-0efaf8214b" Nov 1 00:53:29.650071 kubelet[1750]: I1101 00:53:29.649954 1750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a1cf501193079699a0d2938452a031d9-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-0efaf8214b\" (UID: \"a1cf501193079699a0d2938452a031d9\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-0efaf8214b" Nov 1 00:53:29.650071 kubelet[1750]: I1101 00:53:29.649992 1750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a1cf501193079699a0d2938452a031d9-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-0efaf8214b\" (UID: \"a1cf501193079699a0d2938452a031d9\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-0efaf8214b" Nov 1 
00:53:29.650071 kubelet[1750]: I1101 00:53:29.650020 1750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1cf501193079699a0d2938452a031d9-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-0efaf8214b\" (UID: \"a1cf501193079699a0d2938452a031d9\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-0efaf8214b" Nov 1 00:53:29.650229 kubelet[1750]: I1101 00:53:29.650047 1750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9ca7c5f932118657f12430d70fb881c8-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-0efaf8214b\" (UID: \"9ca7c5f932118657f12430d70fb881c8\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-0efaf8214b" Nov 1 00:53:29.650229 kubelet[1750]: I1101 00:53:29.650075 1750 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a1cf501193079699a0d2938452a031d9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-0efaf8214b\" (UID: \"a1cf501193079699a0d2938452a031d9\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-0efaf8214b" Nov 1 00:53:29.651240 kubelet[1750]: E1101 00:53:29.651208 1750 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://144.126.212.254:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-0efaf8214b?timeout=10s\": dial tcp 144.126.212.254:6443: connect: connection refused" interval="400ms" Nov 1 00:53:29.814040 kubelet[1750]: I1101 00:53:29.814011 1750 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-0efaf8214b" Nov 1 00:53:29.814602 kubelet[1750]: E1101 00:53:29.814558 1750 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://144.126.212.254:6443/api/v1/nodes\": dial tcp 
144.126.212.254:6443: connect: connection refused" node="ci-3510.3.8-n-0efaf8214b" Nov 1 00:53:29.875380 kubelet[1750]: E1101 00:53:29.875263 1750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:53:29.876258 env[1305]: time="2025-11-01T00:53:29.876207881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-0efaf8214b,Uid:a1cf501193079699a0d2938452a031d9,Namespace:kube-system,Attempt:0,}" Nov 1 00:53:29.878765 kubelet[1750]: E1101 00:53:29.877852 1750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:53:29.878912 env[1305]: time="2025-11-01T00:53:29.878213246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-0efaf8214b,Uid:9ca7c5f932118657f12430d70fb881c8,Namespace:kube-system,Attempt:0,}" Nov 1 00:53:29.879826 kubelet[1750]: E1101 00:53:29.879391 1750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:53:29.880375 env[1305]: time="2025-11-01T00:53:29.880186089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-0efaf8214b,Uid:d5206b48dfbc668a622443f679b5b5d1,Namespace:kube-system,Attempt:0,}" Nov 1 00:53:30.052067 kubelet[1750]: E1101 00:53:30.052029 1750 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://144.126.212.254:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-0efaf8214b?timeout=10s\": dial tcp 144.126.212.254:6443: connect: connection refused" interval="800ms" Nov 1 00:53:30.216181 kubelet[1750]: I1101 00:53:30.216087 1750 kubelet_node_status.go:75] 
"Attempting to register node" node="ci-3510.3.8-n-0efaf8214b" Nov 1 00:53:30.217111 kubelet[1750]: E1101 00:53:30.217080 1750 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://144.126.212.254:6443/api/v1/nodes\": dial tcp 144.126.212.254:6443: connect: connection refused" node="ci-3510.3.8-n-0efaf8214b" Nov 1 00:53:30.354797 kubelet[1750]: W1101 00:53:30.354709 1750 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://144.126.212.254:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 144.126.212.254:6443: connect: connection refused Nov 1 00:53:30.355029 kubelet[1750]: E1101 00:53:30.354988 1750 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://144.126.212.254:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 144.126.212.254:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:53:30.466811 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount60640084.mount: Deactivated successfully. 
Nov 1 00:53:30.473200 env[1305]: time="2025-11-01T00:53:30.473147394Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:53:30.474021 env[1305]: time="2025-11-01T00:53:30.473991338Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:53:30.475595 env[1305]: time="2025-11-01T00:53:30.475567014Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:53:30.477138 env[1305]: time="2025-11-01T00:53:30.477103394Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:53:30.477892 env[1305]: time="2025-11-01T00:53:30.477870280Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:53:30.480508 env[1305]: time="2025-11-01T00:53:30.480477891Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:53:30.490647 env[1305]: time="2025-11-01T00:53:30.490574146Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:53:30.491482 env[1305]: time="2025-11-01T00:53:30.491459078Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 
00:53:30.492381 env[1305]: time="2025-11-01T00:53:30.492353975Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:53:30.492925 env[1305]: time="2025-11-01T00:53:30.492901885Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:53:30.496970 env[1305]: time="2025-11-01T00:53:30.496788430Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:53:30.497810 env[1305]: time="2025-11-01T00:53:30.497740009Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:53:30.513585 kubelet[1750]: W1101 00:53:30.510123 1750 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://144.126.212.254:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-0efaf8214b&limit=500&resourceVersion=0": dial tcp 144.126.212.254:6443: connect: connection refused Nov 1 00:53:30.513585 kubelet[1750]: E1101 00:53:30.510239 1750 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://144.126.212.254:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-0efaf8214b&limit=500&resourceVersion=0\": dial tcp 144.126.212.254:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:53:30.519404 env[1305]: time="2025-11-01T00:53:30.519309751Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:53:30.519525 env[1305]: time="2025-11-01T00:53:30.519422974Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:53:30.519525 env[1305]: time="2025-11-01T00:53:30.519447781Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:53:30.519682 env[1305]: time="2025-11-01T00:53:30.519651923Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/115bcd035e81347b964fbf855bb9d882a003c9cd5082ac25857559d694cc0f42 pid=1787 runtime=io.containerd.runc.v2 Nov 1 00:53:30.526497 env[1305]: time="2025-11-01T00:53:30.526371267Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:53:30.526810 env[1305]: time="2025-11-01T00:53:30.526682930Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:53:30.526985 env[1305]: time="2025-11-01T00:53:30.526945818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:53:30.527371 env[1305]: time="2025-11-01T00:53:30.527316647Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d87105f4979f557925d2dea6f737df7e65ac1a8a85b2db62fa6616c6d8dd44e0 pid=1811 runtime=io.containerd.runc.v2 Nov 1 00:53:30.532802 env[1305]: time="2025-11-01T00:53:30.532712040Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:53:30.532802 env[1305]: time="2025-11-01T00:53:30.532778253Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:53:30.533020 env[1305]: time="2025-11-01T00:53:30.532982060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:53:30.536993 env[1305]: time="2025-11-01T00:53:30.536893189Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2a4996aea3db08227dc9a1bfb44d9605857072497933e3e5eeb9f0a8cbf86432 pid=1818 runtime=io.containerd.runc.v2 Nov 1 00:53:30.624878 env[1305]: time="2025-11-01T00:53:30.624834058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-0efaf8214b,Uid:a1cf501193079699a0d2938452a031d9,Namespace:kube-system,Attempt:0,} returns sandbox id \"115bcd035e81347b964fbf855bb9d882a003c9cd5082ac25857559d694cc0f42\"" Nov 1 00:53:30.626392 kubelet[1750]: E1101 00:53:30.626168 1750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:53:30.628243 env[1305]: time="2025-11-01T00:53:30.628212206Z" level=info msg="CreateContainer within sandbox \"115bcd035e81347b964fbf855bb9d882a003c9cd5082ac25857559d694cc0f42\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 1 00:53:30.651822 env[1305]: time="2025-11-01T00:53:30.651732167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-0efaf8214b,Uid:9ca7c5f932118657f12430d70fb881c8,Namespace:kube-system,Attempt:0,} returns sandbox id \"2a4996aea3db08227dc9a1bfb44d9605857072497933e3e5eeb9f0a8cbf86432\"" Nov 1 00:53:30.652783 kubelet[1750]: E1101 00:53:30.652665 1750 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:53:30.654477 env[1305]: time="2025-11-01T00:53:30.654447833Z" level=info msg="CreateContainer within sandbox \"2a4996aea3db08227dc9a1bfb44d9605857072497933e3e5eeb9f0a8cbf86432\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 1 00:53:30.661116 env[1305]: time="2025-11-01T00:53:30.661082020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-0efaf8214b,Uid:d5206b48dfbc668a622443f679b5b5d1,Namespace:kube-system,Attempt:0,} returns sandbox id \"d87105f4979f557925d2dea6f737df7e65ac1a8a85b2db62fa6616c6d8dd44e0\"" Nov 1 00:53:30.661490 env[1305]: time="2025-11-01T00:53:30.661254559Z" level=info msg="CreateContainer within sandbox \"115bcd035e81347b964fbf855bb9d882a003c9cd5082ac25857559d694cc0f42\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6c1b6dc43ba24eadd4951d3795452fc910eaf4bbb19983f0c657d621db62b23e\"" Nov 1 00:53:30.661869 env[1305]: time="2025-11-01T00:53:30.661842460Z" level=info msg="StartContainer for \"6c1b6dc43ba24eadd4951d3795452fc910eaf4bbb19983f0c657d621db62b23e\"" Nov 1 00:53:30.662433 kubelet[1750]: E1101 00:53:30.662296 1750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:53:30.664786 env[1305]: time="2025-11-01T00:53:30.664734578Z" level=info msg="CreateContainer within sandbox \"d87105f4979f557925d2dea6f737df7e65ac1a8a85b2db62fa6616c6d8dd44e0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 1 00:53:30.671561 env[1305]: time="2025-11-01T00:53:30.671529613Z" level=info msg="CreateContainer within sandbox \"2a4996aea3db08227dc9a1bfb44d9605857072497933e3e5eeb9f0a8cbf86432\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c2d2696a67e47173d6b803232ef154f5f637eeb5800bde228ee04a8136d65551\"" Nov 1 00:53:30.672127 env[1305]: time="2025-11-01T00:53:30.672103648Z" level=info msg="StartContainer for \"c2d2696a67e47173d6b803232ef154f5f637eeb5800bde228ee04a8136d65551\"" Nov 1 00:53:30.676852 env[1305]: time="2025-11-01T00:53:30.676805331Z" level=info msg="CreateContainer within sandbox \"d87105f4979f557925d2dea6f737df7e65ac1a8a85b2db62fa6616c6d8dd44e0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9d9ee1858d2edeccdc696c7112876314969f4f3eaec38d82152f1ae78a6d1ff3\"" Nov 1 00:53:30.677379 env[1305]: time="2025-11-01T00:53:30.677348679Z" level=info msg="StartContainer for \"9d9ee1858d2edeccdc696c7112876314969f4f3eaec38d82152f1ae78a6d1ff3\"" Nov 1 00:53:30.771130 env[1305]: time="2025-11-01T00:53:30.771011872Z" level=info msg="StartContainer for \"6c1b6dc43ba24eadd4951d3795452fc910eaf4bbb19983f0c657d621db62b23e\" returns successfully" Nov 1 00:53:30.791199 env[1305]: time="2025-11-01T00:53:30.791149950Z" level=info msg="StartContainer for \"c2d2696a67e47173d6b803232ef154f5f637eeb5800bde228ee04a8136d65551\" returns successfully" Nov 1 00:53:30.808380 kubelet[1750]: W1101 00:53:30.808187 1750 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://144.126.212.254:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 144.126.212.254:6443: connect: connection refused Nov 1 00:53:30.808380 kubelet[1750]: E1101 00:53:30.808264 1750 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://144.126.212.254:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 144.126.212.254:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:53:30.827925 env[1305]: 
time="2025-11-01T00:53:30.827876432Z" level=info msg="StartContainer for \"9d9ee1858d2edeccdc696c7112876314969f4f3eaec38d82152f1ae78a6d1ff3\" returns successfully" Nov 1 00:53:30.852653 kubelet[1750]: E1101 00:53:30.852593 1750 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://144.126.212.254:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-0efaf8214b?timeout=10s\": dial tcp 144.126.212.254:6443: connect: connection refused" interval="1.6s" Nov 1 00:53:30.924818 kubelet[1750]: W1101 00:53:30.924695 1750 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://144.126.212.254:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 144.126.212.254:6443: connect: connection refused Nov 1 00:53:30.924818 kubelet[1750]: E1101 00:53:30.924783 1750 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://144.126.212.254:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 144.126.212.254:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:53:31.019155 kubelet[1750]: I1101 00:53:31.018705 1750 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-0efaf8214b" Nov 1 00:53:31.019155 kubelet[1750]: E1101 00:53:31.019101 1750 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://144.126.212.254:6443/api/v1/nodes\": dial tcp 144.126.212.254:6443: connect: connection refused" node="ci-3510.3.8-n-0efaf8214b" Nov 1 00:53:31.503007 kubelet[1750]: E1101 00:53:31.502975 1750 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-0efaf8214b\" not found" node="ci-3510.3.8-n-0efaf8214b" Nov 1 00:53:31.503737 kubelet[1750]: E1101 00:53:31.503718 1750 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:53:31.508221 kubelet[1750]: E1101 00:53:31.508196 1750 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-0efaf8214b\" not found" node="ci-3510.3.8-n-0efaf8214b" Nov 1 00:53:31.508473 kubelet[1750]: E1101 00:53:31.508458 1750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:53:31.509935 kubelet[1750]: E1101 00:53:31.509917 1750 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-0efaf8214b\" not found" node="ci-3510.3.8-n-0efaf8214b" Nov 1 00:53:31.510187 kubelet[1750]: E1101 00:53:31.510172 1750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:53:32.511646 kubelet[1750]: E1101 00:53:32.511613 1750 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-0efaf8214b\" not found" node="ci-3510.3.8-n-0efaf8214b" Nov 1 00:53:32.512247 kubelet[1750]: E1101 00:53:32.512227 1750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:53:32.512638 kubelet[1750]: E1101 00:53:32.512619 1750 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-0efaf8214b\" not found" node="ci-3510.3.8-n-0efaf8214b" Nov 1 00:53:32.512848 kubelet[1750]: E1101 00:53:32.512832 1750 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:53:32.620657 kubelet[1750]: I1101 00:53:32.620629 1750 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-0efaf8214b" Nov 1 00:53:32.644683 kubelet[1750]: E1101 00:53:32.644650 1750 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.8-n-0efaf8214b\" not found" node="ci-3510.3.8-n-0efaf8214b" Nov 1 00:53:32.733295 kubelet[1750]: I1101 00:53:32.733261 1750 kubelet_node_status.go:78] "Successfully registered node" node="ci-3510.3.8-n-0efaf8214b" Nov 1 00:53:32.733804 kubelet[1750]: E1101 00:53:32.733782 1750 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-3510.3.8-n-0efaf8214b\": node \"ci-3510.3.8-n-0efaf8214b\" not found" Nov 1 00:53:32.742844 kubelet[1750]: E1101 00:53:32.742820 1750 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-0efaf8214b\" not found" Nov 1 00:53:32.844051 kubelet[1750]: E1101 00:53:32.843937 1750 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-0efaf8214b\" not found" Nov 1 00:53:32.945368 kubelet[1750]: E1101 00:53:32.945322 1750 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-0efaf8214b\" not found" Nov 1 00:53:33.048615 kubelet[1750]: I1101 00:53:33.048570 1750 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-0efaf8214b" Nov 1 00:53:33.054489 kubelet[1750]: E1101 00:53:33.054458 1750 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.8-n-0efaf8214b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510.3.8-n-0efaf8214b" Nov 1 00:53:33.054648 kubelet[1750]: I1101 00:53:33.054634 1750 
kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-0efaf8214b" Nov 1 00:53:33.056552 kubelet[1750]: E1101 00:53:33.056524 1750 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-3510.3.8-n-0efaf8214b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-0efaf8214b" Nov 1 00:53:33.056645 kubelet[1750]: I1101 00:53:33.056549 1750 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-0efaf8214b" Nov 1 00:53:33.058100 kubelet[1750]: E1101 00:53:33.058074 1750 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.8-n-0efaf8214b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-3510.3.8-n-0efaf8214b" Nov 1 00:53:33.377461 kubelet[1750]: I1101 00:53:33.377420 1750 apiserver.go:52] "Watching apiserver" Nov 1 00:53:33.449247 kubelet[1750]: I1101 00:53:33.449191 1750 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 00:53:34.969200 systemd[1]: Reloading. Nov 1 00:53:35.044781 /usr/lib/systemd/system-generators/torcx-generator[2030]: time="2025-11-01T00:53:35Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 00:53:35.044816 /usr/lib/systemd/system-generators/torcx-generator[2030]: time="2025-11-01T00:53:35Z" level=info msg="torcx already run" Nov 1 00:53:35.147221 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Nov 1 00:53:35.147242 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 00:53:35.169009 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:53:35.255401 kubelet[1750]: I1101 00:53:35.255169 1750 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:53:35.255867 systemd[1]: Stopping kubelet.service... Nov 1 00:53:35.279364 systemd[1]: kubelet.service: Deactivated successfully. Nov 1 00:53:35.279836 systemd[1]: Stopped kubelet.service. Nov 1 00:53:35.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:35.281696 kernel: kauditd_printk_skb: 43 callbacks suppressed Nov 1 00:53:35.281792 kernel: audit: type=1131 audit(1761958415.279:240): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:35.282995 systemd[1]: Starting kubelet.service... Nov 1 00:53:35.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-144.126.212.254:22-157.230.249.150:62443 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:35.608125 systemd[1]: Started sshd@7-144.126.212.254:22-157.230.249.150:62443.service. 
Nov 1 00:53:35.615842 kernel: audit: type=1130 audit(1761958415.607:241): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-144.126.212.254:22-157.230.249.150:62443 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:35.630000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-144.126.212.254:22-157.230.249.150:62443 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:35.632958 sshd[2088]: kex_exchange_identification: banner line contains invalid characters Nov 1 00:53:35.632958 sshd[2088]: banner exchange: Connection from 157.230.249.150 port 62443: invalid format Nov 1 00:53:35.630487 systemd[1]: sshd@7-144.126.212.254:22-157.230.249.150:62443.service: Deactivated successfully. Nov 1 00:53:35.638794 kernel: audit: type=1131 audit(1761958415.630:242): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-144.126.212.254:22-157.230.249.150:62443 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:36.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:36.214677 systemd[1]: Started kubelet.service. Nov 1 00:53:36.221917 kernel: audit: type=1130 audit(1761958416.214:243): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:36.292789 kubelet[2095]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:53:36.292789 kubelet[2095]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 00:53:36.292789 kubelet[2095]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:53:36.294484 kubelet[2095]: I1101 00:53:36.294446 2095 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:53:36.312922 kubelet[2095]: I1101 00:53:36.312860 2095 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 1 00:53:36.312922 kubelet[2095]: I1101 00:53:36.312931 2095 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:53:36.314590 kubelet[2095]: I1101 00:53:36.313229 2095 server.go:954] "Client rotation is on, will bootstrap in background" Nov 1 00:53:36.314590 kubelet[2095]: I1101 00:53:36.314570 2095 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 1 00:53:36.332170 kubelet[2095]: I1101 00:53:36.332036 2095 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:53:36.337082 kubelet[2095]: E1101 00:53:36.337041 2095 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 00:53:36.337400 kubelet[2095]: I1101 00:53:36.337380 2095 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Nov 1 00:53:36.341422 kubelet[2095]: I1101 00:53:36.341393 2095 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 1 00:53:36.342379 kubelet[2095]: I1101 00:53:36.342318 2095 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:53:36.353284 kubelet[2095]: I1101 00:53:36.342506 2095 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-n-0efaf8214b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":nu
ll,"CgroupVersion":1} Nov 1 00:53:36.353522 kubelet[2095]: I1101 00:53:36.353296 2095 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 00:53:36.353522 kubelet[2095]: I1101 00:53:36.353312 2095 container_manager_linux.go:304] "Creating device plugin manager" Nov 1 00:53:36.353522 kubelet[2095]: I1101 00:53:36.353372 2095 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:53:36.353660 kubelet[2095]: I1101 00:53:36.353528 2095 kubelet.go:446] "Attempting to sync node with API server" Nov 1 00:53:36.353660 kubelet[2095]: I1101 00:53:36.353548 2095 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:53:36.353660 kubelet[2095]: I1101 00:53:36.353566 2095 kubelet.go:352] "Adding apiserver pod source" Nov 1 00:53:36.353660 kubelet[2095]: I1101 00:53:36.353577 2095 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:53:36.354710 kubelet[2095]: I1101 00:53:36.354681 2095 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Nov 1 00:53:36.355493 kubelet[2095]: I1101 00:53:36.355470 2095 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 1 00:53:36.356252 kubelet[2095]: I1101 00:53:36.356224 2095 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 00:53:36.356405 kubelet[2095]: I1101 00:53:36.356389 2095 server.go:1287] "Started kubelet" Nov 1 00:53:36.374509 kubelet[2095]: I1101 00:53:36.363210 2095 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:53:36.381770 kubelet[2095]: I1101 00:53:36.381680 2095 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:53:36.382076 kubelet[2095]: I1101 00:53:36.382049 2095 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:53:36.404000 audit[2095]: AVC avc: 
denied { mac_admin } for pid=2095 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:53:36.412305 kubelet[2095]: I1101 00:53:36.405474 2095 kubelet.go:1507] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins_registry: invalid argument" Nov 1 00:53:36.412305 kubelet[2095]: I1101 00:53:36.405578 2095 kubelet.go:1511] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins: invalid argument" Nov 1 00:53:36.412305 kubelet[2095]: I1101 00:53:36.405616 2095 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:53:36.404000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 00:53:36.415914 kernel: audit: type=1400 audit(1761958416.404:244): avc: denied { mac_admin } for pid=2095 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:53:36.415974 kernel: audit: type=1401 audit(1761958416.404:244): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 00:53:36.404000 audit[2095]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0009287e0 a1=c000a51710 a2=c0009287b0 a3=25 items=0 ppid=1 pid=2095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:36.416446 kubelet[2095]: I1101 00:53:36.416423 2095 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:53:36.419467 kubelet[2095]: I1101 00:53:36.419444 2095 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 00:53:36.425189 kernel: audit: type=1300 audit(1761958416.404:244): arch=c000003e syscall=188 success=no exit=-22 a0=c0009287e0 a1=c000a51710 a2=c0009287b0 a3=25 items=0 ppid=1 pid=2095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:36.404000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 00:53:36.428031 kubelet[2095]: I1101 00:53:36.428008 2095 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 00:53:36.428255 kubelet[2095]: I1101 00:53:36.428240 2095 reconciler.go:26] "Reconciler: start to sync state" Nov 1 00:53:36.431239 kubelet[2095]: I1101 00:53:36.431217 2095 server.go:479] "Adding debug handlers to kubelet server" Nov 1 00:53:36.433154 kubelet[2095]: I1101 00:53:36.433132 2095 factory.go:221] Registration of the systemd container factory successfully Nov 1 00:53:36.433351 kubelet[2095]: I1101 00:53:36.433331 2095 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:53:36.434839 kernel: audit: type=1327 audit(1761958416.404:244): 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 00:53:36.436588 kubelet[2095]: I1101 00:53:36.436571 2095 factory.go:221] Registration of the containerd container factory successfully Nov 1 00:53:36.404000 audit[2095]: AVC avc: denied { mac_admin } for pid=2095 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:53:36.404000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 00:53:36.448024 kernel: audit: type=1400 audit(1761958416.404:245): avc: denied { mac_admin } for pid=2095 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:53:36.448099 kernel: audit: type=1401 audit(1761958416.404:245): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 00:53:36.451932 kubelet[2095]: E1101 00:53:36.451909 2095 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 00:53:36.404000 audit[2095]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000942200 a1=c000a51728 a2=c000928870 a3=25 items=0 ppid=1 pid=2095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:36.404000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 00:53:36.464095 kubelet[2095]: I1101 00:53:36.463622 2095 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 1 00:53:36.465849 kubelet[2095]: I1101 00:53:36.464997 2095 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 1 00:53:36.465849 kubelet[2095]: I1101 00:53:36.465038 2095 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 1 00:53:36.465849 kubelet[2095]: I1101 00:53:36.465064 2095 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 1 00:53:36.465849 kubelet[2095]: I1101 00:53:36.465073 2095 kubelet.go:2382] "Starting kubelet main sync loop" Nov 1 00:53:36.465849 kubelet[2095]: E1101 00:53:36.465128 2095 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:53:36.556598 kubelet[2095]: I1101 00:53:36.555166 2095 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:53:36.556598 kubelet[2095]: I1101 00:53:36.555336 2095 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:53:36.556598 kubelet[2095]: I1101 00:53:36.555356 2095 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:53:36.556598 kubelet[2095]: I1101 00:53:36.555948 2095 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 1 00:53:36.556598 kubelet[2095]: I1101 00:53:36.555966 2095 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 1 00:53:36.556598 kubelet[2095]: I1101 00:53:36.556019 2095 policy_none.go:49] "None policy: Start" Nov 1 00:53:36.556598 kubelet[2095]: I1101 00:53:36.556188 2095 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 00:53:36.556598 kubelet[2095]: I1101 00:53:36.556215 2095 state_mem.go:35] "Initializing new in-memory state store" Nov 1 00:53:36.560544 kubelet[2095]: I1101 00:53:36.556622 2095 state_mem.go:75] "Updated machine memory state" Nov 1 00:53:36.565454 kubelet[2095]: E1101 00:53:36.565400 2095 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 1 00:53:36.570000 audit[2095]: AVC avc: denied { mac_admin } for pid=2095 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:53:36.570000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 00:53:36.570000 audit[2095]: SYSCALL arch=c000003e syscall=188 success=no 
exit=-22 a0=c001169f20 a1=c00116e498 a2=c001169ef0 a3=25 items=0 ppid=1 pid=2095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:36.570000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 00:53:36.572317 kubelet[2095]: I1101 00:53:36.571006 2095 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 1 00:53:36.572317 kubelet[2095]: I1101 00:53:36.571116 2095 server.go:94] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/device-plugins/: invalid argument" Nov 1 00:53:36.572317 kubelet[2095]: I1101 00:53:36.571345 2095 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:53:36.572317 kubelet[2095]: I1101 00:53:36.571365 2095 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:53:36.573208 kubelet[2095]: I1101 00:53:36.573185 2095 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:53:36.577283 kubelet[2095]: E1101 00:53:36.576287 2095 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 1 00:53:36.689710 kubelet[2095]: I1101 00:53:36.688938 2095 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-0efaf8214b" Nov 1 00:53:36.698725 kubelet[2095]: I1101 00:53:36.698251 2095 kubelet_node_status.go:124] "Node was previously registered" node="ci-3510.3.8-n-0efaf8214b" Nov 1 00:53:36.698725 kubelet[2095]: I1101 00:53:36.698350 2095 kubelet_node_status.go:78] "Successfully registered node" node="ci-3510.3.8-n-0efaf8214b" Nov 1 00:53:36.766335 kubelet[2095]: I1101 00:53:36.766295 2095 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-0efaf8214b" Nov 1 00:53:36.766620 kubelet[2095]: I1101 00:53:36.766596 2095 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-0efaf8214b" Nov 1 00:53:36.766832 kubelet[2095]: I1101 00:53:36.766365 2095 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-0efaf8214b" Nov 1 00:53:36.775551 kubelet[2095]: W1101 00:53:36.775522 2095 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 00:53:36.779825 kubelet[2095]: W1101 00:53:36.779799 2095 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 00:53:36.780227 kubelet[2095]: W1101 00:53:36.780204 2095 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 00:53:36.839823 kubelet[2095]: I1101 00:53:36.839697 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/a1cf501193079699a0d2938452a031d9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-0efaf8214b\" (UID: \"a1cf501193079699a0d2938452a031d9\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-0efaf8214b" Nov 1 00:53:36.840025 kubelet[2095]: I1101 00:53:36.840006 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9ca7c5f932118657f12430d70fb881c8-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-0efaf8214b\" (UID: \"9ca7c5f932118657f12430d70fb881c8\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-0efaf8214b" Nov 1 00:53:36.840153 kubelet[2095]: I1101 00:53:36.840132 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9ca7c5f932118657f12430d70fb881c8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-0efaf8214b\" (UID: \"9ca7c5f932118657f12430d70fb881c8\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-0efaf8214b" Nov 1 00:53:36.840263 kubelet[2095]: I1101 00:53:36.840247 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1cf501193079699a0d2938452a031d9-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-0efaf8214b\" (UID: \"a1cf501193079699a0d2938452a031d9\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-0efaf8214b" Nov 1 00:53:36.840382 kubelet[2095]: I1101 00:53:36.840367 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a1cf501193079699a0d2938452a031d9-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-0efaf8214b\" (UID: \"a1cf501193079699a0d2938452a031d9\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-0efaf8214b" Nov 1 00:53:36.840494 
kubelet[2095]: I1101 00:53:36.840478 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d5206b48dfbc668a622443f679b5b5d1-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-0efaf8214b\" (UID: \"d5206b48dfbc668a622443f679b5b5d1\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-0efaf8214b" Nov 1 00:53:36.840599 kubelet[2095]: I1101 00:53:36.840585 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9ca7c5f932118657f12430d70fb881c8-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-0efaf8214b\" (UID: \"9ca7c5f932118657f12430d70fb881c8\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-0efaf8214b" Nov 1 00:53:36.840711 kubelet[2095]: I1101 00:53:36.840696 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a1cf501193079699a0d2938452a031d9-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-0efaf8214b\" (UID: \"a1cf501193079699a0d2938452a031d9\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-0efaf8214b" Nov 1 00:53:36.840846 kubelet[2095]: I1101 00:53:36.840830 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a1cf501193079699a0d2938452a031d9-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-0efaf8214b\" (UID: \"a1cf501193079699a0d2938452a031d9\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-0efaf8214b" Nov 1 00:53:37.077851 kubelet[2095]: E1101 00:53:37.077807 2095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:53:37.080946 kubelet[2095]: E1101 00:53:37.080915 2095 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:53:37.081188 kubelet[2095]: E1101 00:53:37.081167 2095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:53:37.281946 systemd[1]: Started sshd@8-144.126.212.254:22-134.209.158.3:49594.service. Nov 1 00:53:37.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-144.126.212.254:22-134.209.158.3:49594 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:37.296188 sshd[2132]: kex_exchange_identification: banner line contains invalid characters Nov 1 00:53:37.296616 sshd[2132]: banner exchange: Connection from 134.209.158.3 port 49594: invalid format Nov 1 00:53:37.298044 systemd[1]: sshd@8-144.126.212.254:22-134.209.158.3:49594.service: Deactivated successfully. Nov 1 00:53:37.297000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-144.126.212.254:22-134.209.158.3:49594 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:53:37.362342 kubelet[2095]: I1101 00:53:37.362280 2095 apiserver.go:52] "Watching apiserver" Nov 1 00:53:37.428295 kubelet[2095]: I1101 00:53:37.428214 2095 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 00:53:37.501979 kubelet[2095]: E1101 00:53:37.500738 2095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:53:37.501979 kubelet[2095]: I1101 00:53:37.500946 2095 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-0efaf8214b" Nov 1 00:53:37.501979 kubelet[2095]: E1101 00:53:37.501396 2095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:53:37.507422 kubelet[2095]: W1101 00:53:37.507400 2095 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 00:53:37.507684 kubelet[2095]: E1101 00:53:37.507636 2095 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.8-n-0efaf8214b\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.8-n-0efaf8214b" Nov 1 00:53:37.507949 kubelet[2095]: E1101 00:53:37.507935 2095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:53:37.528902 kubelet[2095]: I1101 00:53:37.528780 2095 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.8-n-0efaf8214b" podStartSLOduration=1.528759545 podStartE2EDuration="1.528759545s" podCreationTimestamp="2025-11-01 00:53:36 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:53:37.527731413 +0000 UTC m=+1.285827010" watchObservedRunningTime="2025-11-01 00:53:37.528759545 +0000 UTC m=+1.286855139" Nov 1 00:53:37.547257 kubelet[2095]: I1101 00:53:37.547075 2095 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.8-n-0efaf8214b" podStartSLOduration=1.54705742 podStartE2EDuration="1.54705742s" podCreationTimestamp="2025-11-01 00:53:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:53:37.539504582 +0000 UTC m=+1.297600187" watchObservedRunningTime="2025-11-01 00:53:37.54705742 +0000 UTC m=+1.305153007" Nov 1 00:53:37.547257 kubelet[2095]: I1101 00:53:37.547171 2095 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-0efaf8214b" podStartSLOduration=1.54716535 podStartE2EDuration="1.54716535s" podCreationTimestamp="2025-11-01 00:53:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:53:37.546761585 +0000 UTC m=+1.304857178" watchObservedRunningTime="2025-11-01 00:53:37.54716535 +0000 UTC m=+1.305260953" Nov 1 00:53:38.502257 kubelet[2095]: E1101 00:53:38.502223 2095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:53:38.503042 kubelet[2095]: E1101 00:53:38.503018 2095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:53:39.483080 kubelet[2095]: E1101 00:53:39.483049 2095 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:53:39.503317 kubelet[2095]: E1101 00:53:39.503275 2095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:53:41.207488 kubelet[2095]: I1101 00:53:41.207439 2095 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 1 00:53:41.208321 env[1305]: time="2025-11-01T00:53:41.208272260Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 1 00:53:41.209157 kubelet[2095]: I1101 00:53:41.208857 2095 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 1 00:53:41.971743 kubelet[2095]: I1101 00:53:41.971692 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4dafc0d6-e766-4736-94a1-0157a8f397eb-xtables-lock\") pod \"kube-proxy-8pvp6\" (UID: \"4dafc0d6-e766-4736-94a1-0157a8f397eb\") " pod="kube-system/kube-proxy-8pvp6" Nov 1 00:53:41.972010 kubelet[2095]: I1101 00:53:41.971973 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx62s\" (UniqueName: \"kubernetes.io/projected/4dafc0d6-e766-4736-94a1-0157a8f397eb-kube-api-access-zx62s\") pod \"kube-proxy-8pvp6\" (UID: \"4dafc0d6-e766-4736-94a1-0157a8f397eb\") " pod="kube-system/kube-proxy-8pvp6" Nov 1 00:53:41.972129 kubelet[2095]: I1101 00:53:41.972112 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4dafc0d6-e766-4736-94a1-0157a8f397eb-kube-proxy\") pod \"kube-proxy-8pvp6\" (UID: 
\"4dafc0d6-e766-4736-94a1-0157a8f397eb\") " pod="kube-system/kube-proxy-8pvp6" Nov 1 00:53:41.972249 kubelet[2095]: I1101 00:53:41.972232 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4dafc0d6-e766-4736-94a1-0157a8f397eb-lib-modules\") pod \"kube-proxy-8pvp6\" (UID: \"4dafc0d6-e766-4736-94a1-0157a8f397eb\") " pod="kube-system/kube-proxy-8pvp6" Nov 1 00:53:42.081655 kubelet[2095]: E1101 00:53:42.081606 2095 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Nov 1 00:53:42.081898 kubelet[2095]: E1101 00:53:42.081884 2095 projected.go:194] Error preparing data for projected volume kube-api-access-zx62s for pod kube-system/kube-proxy-8pvp6: configmap "kube-root-ca.crt" not found Nov 1 00:53:42.082078 kubelet[2095]: E1101 00:53:42.082065 2095 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4dafc0d6-e766-4736-94a1-0157a8f397eb-kube-api-access-zx62s podName:4dafc0d6-e766-4736-94a1-0157a8f397eb nodeName:}" failed. No retries permitted until 2025-11-01 00:53:42.582019874 +0000 UTC m=+6.340115473 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-zx62s" (UniqueName: "kubernetes.io/projected/4dafc0d6-e766-4736-94a1-0157a8f397eb-kube-api-access-zx62s") pod "kube-proxy-8pvp6" (UID: "4dafc0d6-e766-4736-94a1-0157a8f397eb") : configmap "kube-root-ca.crt" not found Nov 1 00:53:42.312173 kubelet[2095]: W1101 00:53:42.312065 2095 reflector.go:569] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ci-3510.3.8-n-0efaf8214b" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-3510.3.8-n-0efaf8214b' and this object Nov 1 00:53:42.312792 kubelet[2095]: I1101 00:53:42.312065 2095 status_manager.go:890] "Failed to get status for pod" podUID="3069f27d-dfad-4bb7-ad66-a17ba708100b" pod="tigera-operator/tigera-operator-7dcd859c48-srkcd" err="pods \"tigera-operator-7dcd859c48-srkcd\" is forbidden: User \"system:node:ci-3510.3.8-n-0efaf8214b\" cannot get resource \"pods\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ci-3510.3.8-n-0efaf8214b' and this object" Nov 1 00:53:42.312868 kubelet[2095]: W1101 00:53:42.312632 2095 reflector.go:569] object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510.3.8-n-0efaf8214b" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-3510.3.8-n-0efaf8214b' and this object Nov 1 00:53:42.312868 kubelet[2095]: E1101 00:53:42.312845 2095 reflector.go:166] "Unhandled Error" err="object-\"tigera-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-3510.3.8-n-0efaf8214b\" cannot list resource \"configmaps\" in API group \"\" in the namespace 
\"tigera-operator\": no relationship found between node 'ci-3510.3.8-n-0efaf8214b' and this object" logger="UnhandledError" Nov 1 00:53:42.313074 kubelet[2095]: E1101 00:53:42.312781 2095 reflector.go:166] "Unhandled Error" err="object-\"tigera-operator\"/\"kubernetes-services-endpoint\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kubernetes-services-endpoint\" is forbidden: User \"system:node:ci-3510.3.8-n-0efaf8214b\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ci-3510.3.8-n-0efaf8214b' and this object" logger="UnhandledError" Nov 1 00:53:42.374942 kubelet[2095]: I1101 00:53:42.374829 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgckk\" (UniqueName: \"kubernetes.io/projected/3069f27d-dfad-4bb7-ad66-a17ba708100b-kube-api-access-jgckk\") pod \"tigera-operator-7dcd859c48-srkcd\" (UID: \"3069f27d-dfad-4bb7-ad66-a17ba708100b\") " pod="tigera-operator/tigera-operator-7dcd859c48-srkcd" Nov 1 00:53:42.374942 kubelet[2095]: I1101 00:53:42.374912 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3069f27d-dfad-4bb7-ad66-a17ba708100b-var-lib-calico\") pod \"tigera-operator-7dcd859c48-srkcd\" (UID: \"3069f27d-dfad-4bb7-ad66-a17ba708100b\") " pod="tigera-operator/tigera-operator-7dcd859c48-srkcd" Nov 1 00:53:42.677481 kubelet[2095]: I1101 00:53:42.677446 2095 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 1 00:53:42.840092 kubelet[2095]: E1101 00:53:42.839990 2095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:53:42.841563 env[1305]: time="2025-11-01T00:53:42.840941215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8pvp6,Uid:4dafc0d6-e766-4736-94a1-0157a8f397eb,Namespace:kube-system,Attempt:0,}" Nov 1 00:53:42.860814 env[1305]: time="2025-11-01T00:53:42.860728815Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:53:42.861034 env[1305]: time="2025-11-01T00:53:42.861005183Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:53:42.861159 env[1305]: time="2025-11-01T00:53:42.861134481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:53:42.861475 env[1305]: time="2025-11-01T00:53:42.861439300Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/090ee2747ced4c4583cf93460d3214909d4bc3c00e6ec46b973023f79f51205e pid=2149 runtime=io.containerd.runc.v2 Nov 1 00:53:42.911965 env[1305]: time="2025-11-01T00:53:42.911910819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8pvp6,Uid:4dafc0d6-e766-4736-94a1-0157a8f397eb,Namespace:kube-system,Attempt:0,} returns sandbox id \"090ee2747ced4c4583cf93460d3214909d4bc3c00e6ec46b973023f79f51205e\"" Nov 1 00:53:42.913074 kubelet[2095]: E1101 00:53:42.913037 2095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:53:42.915733 env[1305]: time="2025-11-01T00:53:42.915699453Z" level=info msg="CreateContainer within sandbox \"090ee2747ced4c4583cf93460d3214909d4bc3c00e6ec46b973023f79f51205e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 1 00:53:42.933670 env[1305]: time="2025-11-01T00:53:42.933139758Z" level=info msg="CreateContainer within sandbox \"090ee2747ced4c4583cf93460d3214909d4bc3c00e6ec46b973023f79f51205e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"52801d754db1bdf6658f97787fc57d92595e67cd0c7018f7cef1187307133c3d\"" Nov 1 00:53:42.935982 env[1305]: time="2025-11-01T00:53:42.934486872Z" level=info msg="StartContainer for \"52801d754db1bdf6658f97787fc57d92595e67cd0c7018f7cef1187307133c3d\"" Nov 1 00:53:43.015574 env[1305]: time="2025-11-01T00:53:43.015523335Z" level=info msg="StartContainer for \"52801d754db1bdf6658f97787fc57d92595e67cd0c7018f7cef1187307133c3d\" returns successfully" Nov 1 00:53:43.156986 kernel: kauditd_printk_skb: 8 callbacks suppressed Nov 1 00:53:43.157145 kernel: audit: type=1325 
audit(1761958423.149:249): table=mangle:38 family=10 entries=1 op=nft_register_chain pid=2249 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:53:43.149000 audit[2249]: NETFILTER_CFG table=mangle:38 family=10 entries=1 op=nft_register_chain pid=2249 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:53:43.149000 audit[2249]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd718714a0 a2=0 a3=7ffd7187148c items=0 ppid=2198 pid=2249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:43.169396 kernel: audit: type=1300 audit(1761958423.149:249): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd718714a0 a2=0 a3=7ffd7187148c items=0 ppid=2198 pid=2249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:43.149000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Nov 1 00:53:43.149000 audit[2250]: NETFILTER_CFG table=nat:39 family=10 entries=1 op=nft_register_chain pid=2250 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:53:43.183140 kernel: audit: type=1327 audit(1761958423.149:249): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Nov 1 00:53:43.183276 kernel: audit: type=1325 audit(1761958423.149:250): table=nat:39 family=10 entries=1 op=nft_register_chain pid=2250 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:53:43.183337 kernel: audit: type=1300 audit(1761958423.149:250): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd44c12b00 a2=0 a3=7ffd44c12aec items=0 ppid=2198 pid=2250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:43.149000 audit[2250]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd44c12b00 a2=0 a3=7ffd44c12aec items=0 ppid=2198 pid=2250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:43.149000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Nov 1 00:53:43.196397 kernel: audit: type=1327 audit(1761958423.149:250): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Nov 1 00:53:43.151000 audit[2251]: NETFILTER_CFG table=filter:40 family=10 entries=1 op=nft_register_chain pid=2251 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:53:43.151000 audit[2251]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe955ef3c0 a2=0 a3=7ffe955ef3ac items=0 ppid=2198 pid=2251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:43.210546 kernel: audit: type=1325 audit(1761958423.151:251): table=filter:40 family=10 entries=1 op=nft_register_chain pid=2251 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:53:43.210688 kernel: audit: type=1300 audit(1761958423.151:251): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe955ef3c0 a2=0 a3=7ffe955ef3ac items=0 ppid=2198 pid=2251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:43.151000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Nov 1 00:53:43.212900 env[1305]: time="2025-11-01T00:53:43.212351227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-srkcd,Uid:3069f27d-dfad-4bb7-ad66-a17ba708100b,Namespace:tigera-operator,Attempt:0,}" Nov 1 00:53:43.151000 audit[2252]: NETFILTER_CFG table=mangle:41 family=2 entries=1 op=nft_register_chain pid=2252 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:53:43.220904 kernel: audit: type=1327 audit(1761958423.151:251): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Nov 1 00:53:43.221001 kernel: audit: type=1325 audit(1761958423.151:252): table=mangle:41 family=2 entries=1 op=nft_register_chain pid=2252 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:53:43.151000 audit[2252]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffffe3014f0 a2=0 a3=7ffffe3014dc items=0 ppid=2198 pid=2252 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:43.151000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Nov 1 00:53:43.151000 audit[2253]: NETFILTER_CFG table=nat:42 family=2 entries=1 op=nft_register_chain pid=2253 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:53:43.151000 audit[2253]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc9d6ded70 a2=0 a3=7ffc9d6ded5c items=0 ppid=2198 pid=2253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:43.151000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Nov 1 00:53:43.156000 audit[2254]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2254 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:53:43.156000 audit[2254]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd45bd2070 a2=0 a3=7ffd45bd205c items=0 ppid=2198 pid=2254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:43.156000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Nov 1 00:53:43.236173 env[1305]: time="2025-11-01T00:53:43.235971693Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:53:43.236173 env[1305]: time="2025-11-01T00:53:43.236014881Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:53:43.236173 env[1305]: time="2025-11-01T00:53:43.236026677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:53:43.237077 env[1305]: time="2025-11-01T00:53:43.236220663Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4162c5cdd6ded985cfe4976265d50cac967dc690a90fe82619e4f8585e8a8afc pid=2264 runtime=io.containerd.runc.v2 Nov 1 00:53:43.262000 audit[2285]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2285 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:53:43.262000 audit[2285]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffce07bf750 a2=0 a3=7ffce07bf73c items=0 ppid=2198 pid=2285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:43.262000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Nov 1 00:53:43.275000 audit[2292]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2292 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:53:43.275000 audit[2292]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc8f6c7070 a2=0 a3=7ffc8f6c705c items=0 ppid=2198 pid=2292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:43.275000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Nov 1 00:53:43.285000 audit[2295]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2295 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:53:43.285000 audit[2295]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffeed2ee7a0 a2=0 a3=7ffeed2ee78c items=0 ppid=2198 pid=2295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:43.285000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Nov 1 00:53:43.287000 audit[2296]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2296 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:53:43.287000 audit[2296]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcb70950c0 a2=0 a3=7ffcb70950ac items=0 ppid=2198 pid=2296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:43.287000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Nov 1 00:53:43.290000 audit[2298]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2298 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:53:43.290000 audit[2298]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc81ef7e90 a2=0 a3=7ffc81ef7e7c items=0 ppid=2198 pid=2298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:43.290000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Nov 1 00:53:43.291000 audit[2299]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2299 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:53:43.291000 audit[2299]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdd25175b0 a2=0 a3=7ffdd251759c items=0 ppid=2198 pid=2299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:43.291000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Nov 1 00:53:43.297000 audit[2301]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2301 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:53:43.297000 audit[2301]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc76867eb0 a2=0 a3=7ffc76867e9c items=0 ppid=2198 pid=2301 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:43.297000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Nov 1 00:53:43.319063 env[1305]: time="2025-11-01T00:53:43.319018563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-srkcd,Uid:3069f27d-dfad-4bb7-ad66-a17ba708100b,Namespace:tigera-operator,Attempt:0,} returns sandbox 
id \"4162c5cdd6ded985cfe4976265d50cac967dc690a90fe82619e4f8585e8a8afc\"" Nov 1 00:53:43.323055 env[1305]: time="2025-11-01T00:53:43.323019969Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 1 00:53:43.326000 audit[2309]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2309 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:53:43.326000 audit[2309]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd0e573ae0 a2=0 a3=7ffd0e573acc items=0 ppid=2198 pid=2309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:43.326000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Nov 1 00:53:43.328000 audit[2315]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2315 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:53:43.328000 audit[2315]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffff7e90840 a2=0 a3=7ffff7e9082c items=0 ppid=2198 pid=2315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:43.328000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Nov 1 00:53:43.332000 audit[2317]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2317 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:53:43.332000 audit[2317]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffeaaea9660 a2=0 a3=7ffeaaea964c items=0 
ppid=2198 pid=2317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:43.332000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Nov 1 00:53:43.333000 audit[2318]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2318 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:53:43.333000 audit[2318]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffde0a6fe60 a2=0 a3=7ffde0a6fe4c items=0 ppid=2198 pid=2318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:43.333000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Nov 1 00:53:43.336000 audit[2320]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2320 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:53:43.336000 audit[2320]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffee6b0cbc0 a2=0 a3=7ffee6b0cbac items=0 ppid=2198 pid=2320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:43.336000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Nov 1 
00:53:43.341000 audit[2323]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2323 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:53:43.341000 audit[2323]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc45338240 a2=0 a3=7ffc4533822c items=0 ppid=2198 pid=2323 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:43.341000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Nov 1 00:53:43.345000 audit[2326]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2326 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:53:43.345000 audit[2326]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffde0d04940 a2=0 a3=7ffde0d0492c items=0 ppid=2198 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:43.345000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Nov 1 00:53:43.347000 audit[2327]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2327 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:53:43.347000 audit[2327]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd9ce13670 a2=0 a3=7ffd9ce1365c items=0 ppid=2198 pid=2327 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:43.347000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Nov 1 00:53:43.351000 audit[2329]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2329 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:53:43.351000 audit[2329]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffc6b942180 a2=0 a3=7ffc6b94216c items=0 ppid=2198 pid=2329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:43.351000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Nov 1 00:53:43.356000 audit[2332]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2332 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:53:43.356000 audit[2332]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffffd1cc2f0 a2=0 a3=7ffffd1cc2dc items=0 ppid=2198 pid=2332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:43.356000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Nov 1 00:53:43.357000 audit[2333]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2333 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:53:43.357000 audit[2333]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc683d80b0 a2=0 a3=7ffc683d809c items=0 ppid=2198 pid=2333 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:43.357000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Nov 1 00:53:43.360000 audit[2335]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2335 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:53:43.360000 audit[2335]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7fff6f3202f0 a2=0 a3=7fff6f3202dc items=0 ppid=2198 pid=2335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:43.360000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Nov 1 00:53:43.392000 audit[2341]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2341 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:53:43.392000 audit[2341]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe67dca4a0 a2=0 a3=7ffe67dca48c items=0 ppid=2198 pid=2341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:43.392000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:53:43.402000 audit[2341]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2341 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:53:43.402000 audit[2341]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffe67dca4a0 a2=0 a3=7ffe67dca48c items=0 ppid=2198 pid=2341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:43.402000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:53:43.404000 audit[2346]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2346 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:53:43.404000 audit[2346]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffdbe5959b0 a2=0 a3=7ffdbe59599c items=0 ppid=2198 pid=2346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:43.404000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Nov 1 00:53:43.407000 audit[2348]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2348 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:53:43.407000 audit[2348]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffde35a8780 a2=0 a3=7ffde35a876c items=0 ppid=2198 pid=2348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:43.407000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Nov 1 00:53:43.413000 audit[2351]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2351 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:53:43.413000 audit[2351]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffc46dd41f0 a2=0 a3=7ffc46dd41dc items=0 ppid=2198 pid=2351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:43.413000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Nov 1 00:53:43.414000 audit[2352]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2352 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:53:43.414000 audit[2352]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd0af56a80 a2=0 a3=7ffd0af56a6c items=0 ppid=2198 pid=2352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:43.414000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Nov 1 00:53:43.418000 audit[2354]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2354 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:53:43.418000 audit[2354]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff75c7c740 a2=0 a3=7fff75c7c72c items=0 ppid=2198 pid=2354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:43.418000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Nov 1 00:53:43.420000 audit[2355]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2355 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:53:43.420000 audit[2355]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe58541720 a2=0 a3=7ffe5854170c items=0 ppid=2198 pid=2355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:43.420000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Nov 1 00:53:43.422000 audit[2357]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2357 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:53:43.422000 audit[2357]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc16f18120 a2=0 a3=7ffc16f1810c items=0 ppid=2198 pid=2357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:43.422000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Nov 1 00:53:43.427000 audit[2360]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2360 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:53:43.427000 audit[2360]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffda30d08f0 a2=0 a3=7ffda30d08dc items=0 ppid=2198 pid=2360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:43.427000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Nov 1 00:53:43.428000 audit[2361]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2361 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:53:43.428000 audit[2361]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd8e3c79e0 a2=0 a3=7ffd8e3c79cc items=0 ppid=2198 pid=2361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:43.428000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Nov 1 00:53:43.431000 audit[2363]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2363 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:53:43.431000 audit[2363]: SYSCALL arch=c000003e syscall=46 
success=yes exit=528 a0=3 a1=7ffe292fee90 a2=0 a3=7ffe292fee7c items=0 ppid=2198 pid=2363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:43.431000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Nov 1 00:53:43.432000 audit[2364]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2364 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:53:43.432000 audit[2364]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd75b8c110 a2=0 a3=7ffd75b8c0fc items=0 ppid=2198 pid=2364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:43.432000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Nov 1 00:53:43.435000 audit[2366]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2366 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:53:43.435000 audit[2366]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffcbdc1550 a2=0 a3=7fffcbdc153c items=0 ppid=2198 pid=2366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:43.435000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Nov 1 00:53:43.439000 audit[2369]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2369 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:53:43.439000 audit[2369]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc9aa3a440 a2=0 a3=7ffc9aa3a42c items=0 ppid=2198 pid=2369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:43.439000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Nov 1 00:53:43.444000 audit[2372]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2372 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:53:43.444000 audit[2372]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc4b854040 a2=0 a3=7ffc4b85402c items=0 ppid=2198 pid=2372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:43.444000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Nov 1 00:53:43.446000 audit[2373]: NETFILTER_CFG table=nat:79 family=10 entries=1 
op=nft_register_chain pid=2373 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:53:43.446000 audit[2373]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffd87b2470 a2=0 a3=7fffd87b245c items=0 ppid=2198 pid=2373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:43.446000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Nov 1 00:53:43.452000 audit[2375]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2375 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:53:43.452000 audit[2375]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffc1f8281b0 a2=0 a3=7ffc1f82819c items=0 ppid=2198 pid=2375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:43.452000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Nov 1 00:53:43.458000 audit[2378]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2378 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:53:43.458000 audit[2378]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffe1127a640 a2=0 a3=7ffe1127a62c items=0 ppid=2198 pid=2378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:43.458000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Nov 1 00:53:43.460000 audit[2379]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2379 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:53:43.460000 audit[2379]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd760ce850 a2=0 a3=7ffd760ce83c items=0 ppid=2198 pid=2379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:43.460000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Nov 1 00:53:43.463000 audit[2381]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2381 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:53:43.463000 audit[2381]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffd2e40c460 a2=0 a3=7ffd2e40c44c items=0 ppid=2198 pid=2381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:43.463000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Nov 1 00:53:43.464000 audit[2382]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2382 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:53:43.464000 audit[2382]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffed330a10 a2=0 a3=7fffed3309fc 
items=0 ppid=2198 pid=2382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:43.464000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Nov 1 00:53:43.468000 audit[2384]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2384 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:53:43.468000 audit[2384]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff47d72e10 a2=0 a3=7fff47d72dfc items=0 ppid=2198 pid=2384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:43.468000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Nov 1 00:53:43.476000 audit[2387]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2387 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:53:43.476000 audit[2387]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd2b8fcfd0 a2=0 a3=7ffd2b8fcfbc items=0 ppid=2198 pid=2387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:43.476000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Nov 1 00:53:43.481000 audit[2389]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2389 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Nov 1 00:53:43.481000 audit[2389]: SYSCALL arch=c000003e syscall=46 
success=yes exit=2088 a0=3 a1=7fff9bd52860 a2=0 a3=7fff9bd5284c items=0 ppid=2198 pid=2389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:43.481000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:53:43.481000 audit[2389]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2389 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Nov 1 00:53:43.481000 audit[2389]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7fff9bd52860 a2=0 a3=7fff9bd5284c items=0 ppid=2198 pid=2389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:43.481000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:53:43.514307 kubelet[2095]: E1101 00:53:43.514207 2095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:53:43.536389 kubelet[2095]: I1101 00:53:43.536337 2095 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8pvp6" podStartSLOduration=2.536317597 podStartE2EDuration="2.536317597s" podCreationTimestamp="2025-11-01 00:53:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:53:43.526230972 +0000 UTC m=+7.284326579" watchObservedRunningTime="2025-11-01 00:53:43.536317597 +0000 UTC m=+7.294413193" Nov 1 00:53:43.685428 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount722044259.mount: Deactivated successfully. Nov 1 00:53:44.718512 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2494249145.mount: Deactivated successfully. Nov 1 00:53:45.627273 kubelet[2095]: E1101 00:53:45.626920 2095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:53:45.715253 env[1305]: time="2025-11-01T00:53:45.715188348Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.38.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:53:45.716682 env[1305]: time="2025-11-01T00:53:45.716639114Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:53:45.717999 env[1305]: time="2025-11-01T00:53:45.717968900Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.38.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:53:45.719243 env[1305]: time="2025-11-01T00:53:45.719216000Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:53:45.719889 env[1305]: time="2025-11-01T00:53:45.719862733Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 1 00:53:45.724136 env[1305]: time="2025-11-01T00:53:45.724098480Z" level=info msg="CreateContainer within sandbox \"4162c5cdd6ded985cfe4976265d50cac967dc690a90fe82619e4f8585e8a8afc\" for container 
&ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 1 00:53:45.739345 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3470025483.mount: Deactivated successfully. Nov 1 00:53:45.744015 env[1305]: time="2025-11-01T00:53:45.743976419Z" level=info msg="CreateContainer within sandbox \"4162c5cdd6ded985cfe4976265d50cac967dc690a90fe82619e4f8585e8a8afc\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f0f0c7f13b305893daa403df2b68e3c66eb3fd2778fe6b49a7eb75119f8bb925\"" Nov 1 00:53:45.744996 env[1305]: time="2025-11-01T00:53:45.744959757Z" level=info msg="StartContainer for \"f0f0c7f13b305893daa403df2b68e3c66eb3fd2778fe6b49a7eb75119f8bb925\"" Nov 1 00:53:45.807814 env[1305]: time="2025-11-01T00:53:45.807736790Z" level=info msg="StartContainer for \"f0f0c7f13b305893daa403df2b68e3c66eb3fd2778fe6b49a7eb75119f8bb925\" returns successfully" Nov 1 00:53:46.520205 kubelet[2095]: E1101 00:53:46.520166 2095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:53:49.291208 kubelet[2095]: E1101 00:53:49.291162 2095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:53:49.304175 kubelet[2095]: I1101 00:53:49.304125 2095 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-srkcd" podStartSLOduration=4.902700562 podStartE2EDuration="7.303367041s" podCreationTimestamp="2025-11-01 00:53:42 +0000 UTC" firstStartedPulling="2025-11-01 00:53:43.320474317 +0000 UTC m=+7.078569911" lastFinishedPulling="2025-11-01 00:53:45.7211408 +0000 UTC m=+9.479236390" observedRunningTime="2025-11-01 00:53:46.543284607 +0000 UTC m=+10.301380217" watchObservedRunningTime="2025-11-01 00:53:49.303367041 +0000 UTC 
m=+13.061462634" Nov 1 00:53:49.500742 kubelet[2095]: E1101 00:53:49.500699 2095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:53:49.526115 kubelet[2095]: E1101 00:53:49.526073 2095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:53:50.791800 update_engine[1292]: I1101 00:53:50.791524 1292 update_attempter.cc:509] Updating boot flags... Nov 1 00:53:52.277431 sudo[1470]: pam_unix(sudo:session): session closed for user root Nov 1 00:53:52.283788 kernel: kauditd_printk_skb: 143 callbacks suppressed Nov 1 00:53:52.283905 kernel: audit: type=1106 audit(1761958432.276:300): pid=1470 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 00:53:52.276000 audit[1470]: USER_END pid=1470 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 00:53:52.276000 audit[1470]: CRED_DISP pid=1470 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 00:53:52.296120 sshd[1464]: pam_unix(sshd:session): session closed for user core Nov 1 00:53:52.296773 kernel: audit: type=1104 audit(1761958432.276:301): pid=1470 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Nov 1 00:53:52.297000 audit[1464]: USER_END pid=1464 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:53:52.299661 systemd[1]: sshd@6-144.126.212.254:22-139.178.89.65:44546.service: Deactivated successfully. Nov 1 00:53:52.300519 systemd[1]: session-7.scope: Deactivated successfully. Nov 1 00:53:52.309836 kernel: audit: type=1106 audit(1761958432.297:302): pid=1464 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:53:52.309965 systemd-logind[1290]: Session 7 logged out. Waiting for processes to exit. Nov 1 00:53:52.312173 systemd-logind[1290]: Removed session 7. Nov 1 00:53:52.297000 audit[1464]: CRED_DISP pid=1464 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:53:52.326779 kernel: audit: type=1104 audit(1761958432.297:303): pid=1464 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:53:52.299000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-144.126.212.254:22-139.178.89.65:44546 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:53:52.334772 kernel: audit: type=1131 audit(1761958432.299:304): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-144.126.212.254:22-139.178.89.65:44546 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:53:53.559000 audit[2492]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2492 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:53:53.565777 kernel: audit: type=1325 audit(1761958433.559:305): table=filter:89 family=2 entries=15 op=nft_register_rule pid=2492 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:53:53.559000 audit[2492]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffcf9b9c110 a2=0 a3=7ffcf9b9c0fc items=0 ppid=2198 pid=2492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:53.576804 kernel: audit: type=1300 audit(1761958433.559:305): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffcf9b9c110 a2=0 a3=7ffcf9b9c0fc items=0 ppid=2198 pid=2492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:53.559000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:53:53.588778 kernel: audit: type=1327 audit(1761958433.559:305): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:53:53.580000 audit[2492]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2492 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:53:53.596783 kernel: audit: type=1325 
audit(1761958433.580:306): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2492 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:53:53.580000 audit[2492]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcf9b9c110 a2=0 a3=0 items=0 ppid=2198 pid=2492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:53.613790 kernel: audit: type=1300 audit(1761958433.580:306): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcf9b9c110 a2=0 a3=0 items=0 ppid=2198 pid=2492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:53.580000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:53:53.643000 audit[2494]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=2494 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:53:53.643000 audit[2494]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffed8798620 a2=0 a3=7ffed879860c items=0 ppid=2198 pid=2494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:53.643000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:53:53.647000 audit[2494]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2494 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:53:53.647000 audit[2494]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffed8798620 
a2=0 a3=0 items=0 ppid=2198 pid=2494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:53.647000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:53:55.779000 audit[2496]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=2496 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:53:55.779000 audit[2496]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffd778b0710 a2=0 a3=7ffd778b06fc items=0 ppid=2198 pid=2496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:55.779000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:53:55.862000 audit[2496]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2496 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:53:55.862000 audit[2496]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd778b0710 a2=0 a3=0 items=0 ppid=2198 pid=2496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:55.862000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:53:55.894000 audit[2498]: NETFILTER_CFG table=filter:95 family=2 entries=19 op=nft_register_rule pid=2498 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:53:55.894000 audit[2498]: SYSCALL arch=c000003e syscall=46 
success=yes exit=7480 a0=3 a1=7ffe7075a1a0 a2=0 a3=7ffe7075a18c items=0 ppid=2198 pid=2498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:55.894000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:53:55.898000 audit[2498]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=2498 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:53:55.898000 audit[2498]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe7075a1a0 a2=0 a3=0 items=0 ppid=2198 pid=2498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:55.898000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:53:57.740000 audit[2500]: NETFILTER_CFG table=filter:97 family=2 entries=21 op=nft_register_rule pid=2500 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:53:57.743408 kernel: kauditd_printk_skb: 19 callbacks suppressed Nov 1 00:53:57.743478 kernel: audit: type=1325 audit(1761958437.740:313): table=filter:97 family=2 entries=21 op=nft_register_rule pid=2500 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:53:57.740000 audit[2500]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffe3a108b90 a2=0 a3=7ffe3a108b7c items=0 ppid=2198 pid=2500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:57.758824 kernel: audit: type=1300 audit(1761958437.740:313): 
arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffe3a108b90 a2=0 a3=7ffe3a108b7c items=0 ppid=2198 pid=2500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:57.740000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:53:57.761000 audit[2500]: NETFILTER_CFG table=nat:98 family=2 entries=12 op=nft_register_rule pid=2500 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:53:57.771222 kernel: audit: type=1327 audit(1761958437.740:313): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:53:57.771300 kernel: audit: type=1325 audit(1761958437.761:314): table=nat:98 family=2 entries=12 op=nft_register_rule pid=2500 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:53:57.761000 audit[2500]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe3a108b90 a2=0 a3=0 items=0 ppid=2198 pid=2500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:57.792787 kernel: audit: type=1300 audit(1761958437.761:314): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe3a108b90 a2=0 a3=0 items=0 ppid=2198 pid=2500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:57.761000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:53:57.812779 kernel: audit: type=1327 audit(1761958437.761:314): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:53:57.834000 audit[2502]: NETFILTER_CFG table=filter:99 family=2 entries=22 op=nft_register_rule pid=2502 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:53:57.834000 audit[2502]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffd2cebc070 a2=0 a3=7ffd2cebc05c items=0 ppid=2198 pid=2502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:57.848776 kernel: audit: type=1325 audit(1761958437.834:315): table=filter:99 family=2 entries=22 op=nft_register_rule pid=2502 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:53:57.848864 kernel: audit: type=1300 audit(1761958437.834:315): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffd2cebc070 a2=0 a3=7ffd2cebc05c items=0 ppid=2198 pid=2502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:57.834000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:53:57.853313 kernel: audit: type=1327 audit(1761958437.834:315): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:53:57.858000 audit[2502]: NETFILTER_CFG table=nat:100 family=2 entries=12 op=nft_register_rule pid=2502 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:53:57.866782 kernel: audit: type=1325 audit(1761958437.858:316): table=nat:100 family=2 entries=12 op=nft_register_rule pid=2502 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:53:57.877141 kubelet[2095]: I1101 
00:53:57.877082 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zz6d\" (UniqueName: \"kubernetes.io/projected/87b8d77d-1ebe-40e2-a047-51a5c4ed0c79-kube-api-access-4zz6d\") pod \"calico-typha-7cf7d48c8-vc5lg\" (UID: \"87b8d77d-1ebe-40e2-a047-51a5c4ed0c79\") " pod="calico-system/calico-typha-7cf7d48c8-vc5lg" Nov 1 00:53:57.877141 kubelet[2095]: I1101 00:53:57.877145 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87b8d77d-1ebe-40e2-a047-51a5c4ed0c79-tigera-ca-bundle\") pod \"calico-typha-7cf7d48c8-vc5lg\" (UID: \"87b8d77d-1ebe-40e2-a047-51a5c4ed0c79\") " pod="calico-system/calico-typha-7cf7d48c8-vc5lg" Nov 1 00:53:57.877533 kubelet[2095]: I1101 00:53:57.877164 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/87b8d77d-1ebe-40e2-a047-51a5c4ed0c79-typha-certs\") pod \"calico-typha-7cf7d48c8-vc5lg\" (UID: \"87b8d77d-1ebe-40e2-a047-51a5c4ed0c79\") " pod="calico-system/calico-typha-7cf7d48c8-vc5lg" Nov 1 00:53:57.858000 audit[2502]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd2cebc070 a2=0 a3=0 items=0 ppid=2198 pid=2502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:57.858000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:53:57.977622 kubelet[2095]: I1101 00:53:57.977585 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/702b248f-18ef-46b4-89e3-5aed8c0a547d-cni-bin-dir\") pod \"calico-node-zfgvj\" (UID: 
\"702b248f-18ef-46b4-89e3-5aed8c0a547d\") " pod="calico-system/calico-node-zfgvj" Nov 1 00:53:57.977869 kubelet[2095]: I1101 00:53:57.977847 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/702b248f-18ef-46b4-89e3-5aed8c0a547d-flexvol-driver-host\") pod \"calico-node-zfgvj\" (UID: \"702b248f-18ef-46b4-89e3-5aed8c0a547d\") " pod="calico-system/calico-node-zfgvj" Nov 1 00:53:57.977966 kubelet[2095]: I1101 00:53:57.977951 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdrk2\" (UniqueName: \"kubernetes.io/projected/702b248f-18ef-46b4-89e3-5aed8c0a547d-kube-api-access-vdrk2\") pod \"calico-node-zfgvj\" (UID: \"702b248f-18ef-46b4-89e3-5aed8c0a547d\") " pod="calico-system/calico-node-zfgvj" Nov 1 00:53:57.978081 kubelet[2095]: I1101 00:53:57.978066 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/702b248f-18ef-46b4-89e3-5aed8c0a547d-node-certs\") pod \"calico-node-zfgvj\" (UID: \"702b248f-18ef-46b4-89e3-5aed8c0a547d\") " pod="calico-system/calico-node-zfgvj" Nov 1 00:53:57.978170 kubelet[2095]: I1101 00:53:57.978156 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/702b248f-18ef-46b4-89e3-5aed8c0a547d-policysync\") pod \"calico-node-zfgvj\" (UID: \"702b248f-18ef-46b4-89e3-5aed8c0a547d\") " pod="calico-system/calico-node-zfgvj" Nov 1 00:53:57.978251 kubelet[2095]: I1101 00:53:57.978237 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/702b248f-18ef-46b4-89e3-5aed8c0a547d-var-lib-calico\") pod \"calico-node-zfgvj\" (UID: \"702b248f-18ef-46b4-89e3-5aed8c0a547d\") " 
pod="calico-system/calico-node-zfgvj" Nov 1 00:53:57.978331 kubelet[2095]: I1101 00:53:57.978317 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/702b248f-18ef-46b4-89e3-5aed8c0a547d-var-run-calico\") pod \"calico-node-zfgvj\" (UID: \"702b248f-18ef-46b4-89e3-5aed8c0a547d\") " pod="calico-system/calico-node-zfgvj" Nov 1 00:53:57.978419 kubelet[2095]: I1101 00:53:57.978403 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/702b248f-18ef-46b4-89e3-5aed8c0a547d-cni-net-dir\") pod \"calico-node-zfgvj\" (UID: \"702b248f-18ef-46b4-89e3-5aed8c0a547d\") " pod="calico-system/calico-node-zfgvj" Nov 1 00:53:57.978492 kubelet[2095]: I1101 00:53:57.978478 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/702b248f-18ef-46b4-89e3-5aed8c0a547d-lib-modules\") pod \"calico-node-zfgvj\" (UID: \"702b248f-18ef-46b4-89e3-5aed8c0a547d\") " pod="calico-system/calico-node-zfgvj" Nov 1 00:53:57.978570 kubelet[2095]: I1101 00:53:57.978555 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/702b248f-18ef-46b4-89e3-5aed8c0a547d-cni-log-dir\") pod \"calico-node-zfgvj\" (UID: \"702b248f-18ef-46b4-89e3-5aed8c0a547d\") " pod="calico-system/calico-node-zfgvj" Nov 1 00:53:57.978673 kubelet[2095]: I1101 00:53:57.978655 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/702b248f-18ef-46b4-89e3-5aed8c0a547d-tigera-ca-bundle\") pod \"calico-node-zfgvj\" (UID: \"702b248f-18ef-46b4-89e3-5aed8c0a547d\") " pod="calico-system/calico-node-zfgvj" Nov 1 00:53:57.978794 kubelet[2095]: I1101 
00:53:57.978743 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/702b248f-18ef-46b4-89e3-5aed8c0a547d-xtables-lock\") pod \"calico-node-zfgvj\" (UID: \"702b248f-18ef-46b4-89e3-5aed8c0a547d\") " pod="calico-system/calico-node-zfgvj" Nov 1 00:53:58.090333 kubelet[2095]: E1101 00:53:58.090015 2095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:53:58.091842 env[1305]: time="2025-11-01T00:53:58.091647380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7cf7d48c8-vc5lg,Uid:87b8d77d-1ebe-40e2-a047-51a5c4ed0c79,Namespace:calico-system,Attempt:0,}" Nov 1 00:53:58.094347 kubelet[2095]: E1101 00:53:58.094324 2095 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:53:58.094525 kubelet[2095]: W1101 00:53:58.094507 2095 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:53:58.095309 kubelet[2095]: E1101 00:53:58.095283 2095 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:53:58.110437 kubelet[2095]: E1101 00:53:58.110415 2095 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:53:58.110598 kubelet[2095]: W1101 00:53:58.110578 2095 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:53:58.110835 kubelet[2095]: E1101 00:53:58.110783 2095 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:53:58.131593 kubelet[2095]: E1101 00:53:58.131554 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-twt7m" podUID="b08705e4-7a04-4c33-a8c8-a3f67298574d" Nov 1 00:53:58.144711 env[1305]: time="2025-11-01T00:53:58.144606883Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:53:58.145614 env[1305]: time="2025-11-01T00:53:58.145565093Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:53:58.145825 env[1305]: time="2025-11-01T00:53:58.145787447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:53:58.146144 env[1305]: time="2025-11-01T00:53:58.146115259Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5f6fbe4a8cee59edad5d0df02c4d3bea6ae6ab09abe511c0a2319d98f75cce2b pid=2515 runtime=io.containerd.runc.v2 Nov 1 00:53:58.169741 kubelet[2095]: E1101 00:53:58.169585 2095 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:53:58.169741 kubelet[2095]: W1101 00:53:58.169608 2095 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:53:58.169741 kubelet[2095]: E1101 00:53:58.169628 2095 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:53:58.172152 kubelet[2095]: E1101 00:53:58.172021 2095 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:53:58.172152 kubelet[2095]: W1101 00:53:58.172037 2095 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:53:58.172152 kubelet[2095]: E1101 00:53:58.172053 2095 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:53:58.176145 kubelet[2095]: E1101 00:53:58.175907 2095 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:53:58.176145 kubelet[2095]: W1101 00:53:58.175923 2095 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:53:58.176145 kubelet[2095]: E1101 00:53:58.175937 2095 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:53:58.176547 kubelet[2095]: E1101 00:53:58.176405 2095 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:53:58.176547 kubelet[2095]: W1101 00:53:58.176421 2095 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:53:58.176547 kubelet[2095]: E1101 00:53:58.176436 2095 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:53:58.176868 kubelet[2095]: E1101 00:53:58.176778 2095 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:53:58.176868 kubelet[2095]: W1101 00:53:58.176790 2095 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:53:58.176868 kubelet[2095]: E1101 00:53:58.176802 2095 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:53:58.177326 kubelet[2095]: E1101 00:53:58.177233 2095 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:53:58.177326 kubelet[2095]: W1101 00:53:58.177246 2095 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:53:58.177326 kubelet[2095]: E1101 00:53:58.177259 2095 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:53:58.178408 kubelet[2095]: E1101 00:53:58.177566 2095 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:53:58.178408 kubelet[2095]: W1101 00:53:58.177577 2095 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:53:58.178408 kubelet[2095]: E1101 00:53:58.177588 2095 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:53:58.178408 kubelet[2095]: E1101 00:53:58.177720 2095 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:53:58.178408 kubelet[2095]: W1101 00:53:58.177727 2095 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:53:58.178408 kubelet[2095]: E1101 00:53:58.177735 2095 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:53:58.178408 kubelet[2095]: E1101 00:53:58.177901 2095 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:53:58.178408 kubelet[2095]: W1101 00:53:58.177908 2095 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:53:58.178408 kubelet[2095]: E1101 00:53:58.177917 2095 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:53:58.178408 kubelet[2095]: E1101 00:53:58.178029 2095 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:53:58.178782 kubelet[2095]: W1101 00:53:58.178035 2095 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:53:58.178782 kubelet[2095]: E1101 00:53:58.178042 2095 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:53:58.178782 kubelet[2095]: E1101 00:53:58.178152 2095 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:53:58.178782 kubelet[2095]: W1101 00:53:58.178158 2095 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:53:58.178782 kubelet[2095]: E1101 00:53:58.178164 2095 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:53:58.178782 kubelet[2095]: E1101 00:53:58.178321 2095 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:53:58.178782 kubelet[2095]: W1101 00:53:58.178329 2095 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:53:58.178782 kubelet[2095]: E1101 00:53:58.178338 2095 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:53:58.179561 kubelet[2095]: E1101 00:53:58.179136 2095 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:53:58.179561 kubelet[2095]: W1101 00:53:58.179148 2095 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:53:58.179561 kubelet[2095]: E1101 00:53:58.179159 2095 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:53:58.179561 kubelet[2095]: E1101 00:53:58.179311 2095 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:53:58.179561 kubelet[2095]: W1101 00:53:58.179317 2095 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:53:58.179561 kubelet[2095]: E1101 00:53:58.179324 2095 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:53:58.179561 kubelet[2095]: E1101 00:53:58.179465 2095 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:53:58.179561 kubelet[2095]: W1101 00:53:58.179472 2095 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:53:58.179561 kubelet[2095]: E1101 00:53:58.179480 2095 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:53:58.180126 kubelet[2095]: E1101 00:53:58.179973 2095 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:53:58.180126 kubelet[2095]: W1101 00:53:58.179984 2095 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:53:58.180126 kubelet[2095]: E1101 00:53:58.179994 2095 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:53:58.180380 kubelet[2095]: E1101 00:53:58.180291 2095 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:53:58.180380 kubelet[2095]: W1101 00:53:58.180301 2095 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:53:58.180380 kubelet[2095]: E1101 00:53:58.180312 2095 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:53:58.180631 kubelet[2095]: E1101 00:53:58.180614 2095 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:53:58.180779 kubelet[2095]: W1101 00:53:58.180736 2095 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:53:58.180865 kubelet[2095]: E1101 00:53:58.180849 2095 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:53:58.181126 kubelet[2095]: E1101 00:53:58.181113 2095 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:53:58.181230 kubelet[2095]: W1101 00:53:58.181215 2095 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:53:58.181321 kubelet[2095]: E1101 00:53:58.181305 2095 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:53:58.181619 kubelet[2095]: E1101 00:53:58.181604 2095 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:53:58.181716 kubelet[2095]: W1101 00:53:58.181701 2095 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:53:58.182024 kubelet[2095]: E1101 00:53:58.182007 2095 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:53:58.182585 kubelet[2095]: E1101 00:53:58.182569 2095 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:53:58.182726 kubelet[2095]: W1101 00:53:58.182707 2095 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:53:58.182882 kubelet[2095]: E1101 00:53:58.182866 2095 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:53:58.182992 kubelet[2095]: I1101 00:53:58.182975 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b08705e4-7a04-4c33-a8c8-a3f67298574d-kubelet-dir\") pod \"csi-node-driver-twt7m\" (UID: \"b08705e4-7a04-4c33-a8c8-a3f67298574d\") " pod="calico-system/csi-node-driver-twt7m" Nov 1 00:53:58.183267 kubelet[2095]: E1101 00:53:58.183253 2095 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:53:58.183360 kubelet[2095]: W1101 00:53:58.183346 2095 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:53:58.183448 kubelet[2095]: E1101 00:53:58.183435 2095 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:53:58.183555 kubelet[2095]: I1101 00:53:58.183540 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b08705e4-7a04-4c33-a8c8-a3f67298574d-registration-dir\") pod \"csi-node-driver-twt7m\" (UID: \"b08705e4-7a04-4c33-a8c8-a3f67298574d\") " pod="calico-system/csi-node-driver-twt7m" Nov 1 00:53:58.183886 kubelet[2095]: E1101 00:53:58.183872 2095 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:53:58.183993 kubelet[2095]: W1101 00:53:58.183979 2095 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:53:58.184087 kubelet[2095]: E1101 00:53:58.184074 2095 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:53:58.184195 kubelet[2095]: I1101 00:53:58.184181 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b08705e4-7a04-4c33-a8c8-a3f67298574d-socket-dir\") pod \"csi-node-driver-twt7m\" (UID: \"b08705e4-7a04-4c33-a8c8-a3f67298574d\") " pod="calico-system/csi-node-driver-twt7m" Nov 1 00:53:58.184448 kubelet[2095]: E1101 00:53:58.184435 2095 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:53:58.184530 kubelet[2095]: W1101 00:53:58.184517 2095 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:53:58.184619 kubelet[2095]: E1101 00:53:58.184606 2095 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:53:58.184711 kubelet[2095]: I1101 00:53:58.184697 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b08705e4-7a04-4c33-a8c8-a3f67298574d-varrun\") pod \"csi-node-driver-twt7m\" (UID: \"b08705e4-7a04-4c33-a8c8-a3f67298574d\") " pod="calico-system/csi-node-driver-twt7m" Nov 1 00:53:58.185004 kubelet[2095]: E1101 00:53:58.184990 2095 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:53:58.185600 kubelet[2095]: W1101 00:53:58.185568 2095 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:53:58.185720 kubelet[2095]: E1101 00:53:58.185706 2095 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:53:58.185830 kubelet[2095]: I1101 00:53:58.185816 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgc7f\" (UniqueName: \"kubernetes.io/projected/b08705e4-7a04-4c33-a8c8-a3f67298574d-kube-api-access-qgc7f\") pod \"csi-node-driver-twt7m\" (UID: \"b08705e4-7a04-4c33-a8c8-a3f67298574d\") " pod="calico-system/csi-node-driver-twt7m" Nov 1 00:53:58.186141 kubelet[2095]: E1101 00:53:58.186127 2095 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:53:58.186292 kubelet[2095]: W1101 00:53:58.186273 2095 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:53:58.186383 kubelet[2095]: E1101 00:53:58.186370 2095 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:53:58.187972 kubelet[2095]: E1101 00:53:58.187920 2095 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:53:58.188083 kubelet[2095]: W1101 00:53:58.188063 2095 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:53:58.188202 kubelet[2095]: E1101 00:53:58.188185 2095 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:53:58.188479 kubelet[2095]: E1101 00:53:58.188465 2095 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:53:58.188574 kubelet[2095]: W1101 00:53:58.188558 2095 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:53:58.188935 kubelet[2095]: E1101 00:53:58.188914 2095 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:53:58.189265 kubelet[2095]: E1101 00:53:58.189251 2095 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:53:58.189368 kubelet[2095]: W1101 00:53:58.189351 2095 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:53:58.189482 kubelet[2095]: E1101 00:53:58.189467 2095 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:53:58.189815 kubelet[2095]: E1101 00:53:58.189801 2095 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:53:58.189955 kubelet[2095]: W1101 00:53:58.189938 2095 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:53:58.190067 kubelet[2095]: E1101 00:53:58.190052 2095 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:53:58.190328 kubelet[2095]: E1101 00:53:58.190315 2095 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:53:58.190426 kubelet[2095]: W1101 00:53:58.190404 2095 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:53:58.190525 kubelet[2095]: E1101 00:53:58.190510 2095 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:53:58.190794 kubelet[2095]: E1101 00:53:58.190781 2095 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:53:58.190921 kubelet[2095]: W1101 00:53:58.190905 2095 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:53:58.191031 kubelet[2095]: E1101 00:53:58.191015 2095 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:53:58.191368 kubelet[2095]: E1101 00:53:58.191354 2095 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:53:58.191475 kubelet[2095]: W1101 00:53:58.191459 2095 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:53:58.191573 kubelet[2095]: E1101 00:53:58.191557 2095 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:53:58.193161 kubelet[2095]: E1101 00:53:58.193145 2095 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:53:58.193288 kubelet[2095]: W1101 00:53:58.193270 2095 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:53:58.193398 kubelet[2095]: E1101 00:53:58.193382 2095 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:53:58.193732 kubelet[2095]: E1101 00:53:58.193717 2095 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:53:58.193873 kubelet[2095]: W1101 00:53:58.193856 2095 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:53:58.193976 kubelet[2095]: E1101 00:53:58.193960 2095 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:53:58.235920 env[1305]: time="2025-11-01T00:53:58.235873889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7cf7d48c8-vc5lg,Uid:87b8d77d-1ebe-40e2-a047-51a5c4ed0c79,Namespace:calico-system,Attempt:0,} returns sandbox id \"5f6fbe4a8cee59edad5d0df02c4d3bea6ae6ab09abe511c0a2319d98f75cce2b\"" Nov 1 00:53:58.237222 kubelet[2095]: E1101 00:53:58.236780 2095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:53:58.238268 env[1305]: time="2025-11-01T00:53:58.238242165Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 1 00:53:58.267701 kubelet[2095]: E1101 00:53:58.267282 2095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:53:58.268215 env[1305]: time="2025-11-01T00:53:58.268179990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zfgvj,Uid:702b248f-18ef-46b4-89e3-5aed8c0a547d,Namespace:calico-system,Attempt:0,}" Nov 1 00:53:58.284991 env[1305]: time="2025-11-01T00:53:58.284921352Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:53:58.285159 env[1305]: time="2025-11-01T00:53:58.285132140Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:53:58.285253 env[1305]: time="2025-11-01T00:53:58.285229762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:53:58.285882 env[1305]: time="2025-11-01T00:53:58.285816718Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/53147f3d6f6e5b83b95ee12f1d97c8eb679b9856974c13cf626efc26b79b2580 pid=2601 runtime=io.containerd.runc.v2 Nov 1 00:53:58.287268 kubelet[2095]: E1101 00:53:58.287043 2095 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:53:58.287268 kubelet[2095]: W1101 00:53:58.287061 2095 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:53:58.287268 kubelet[2095]: E1101 00:53:58.287098 2095 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:53:58.287653 kubelet[2095]: E1101 00:53:58.287478 2095 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:53:58.287653 kubelet[2095]: W1101 00:53:58.287490 2095 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:53:58.287653 kubelet[2095]: E1101 00:53:58.287527 2095 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:53:58.288010 kubelet[2095]: E1101 00:53:58.287853 2095 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:53:58.288010 kubelet[2095]: W1101 00:53:58.287863 2095 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:53:58.288010 kubelet[2095]: E1101 00:53:58.287882 2095 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:53:58.288283 kubelet[2095]: E1101 00:53:58.288165 2095 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:53:58.288283 kubelet[2095]: W1101 00:53:58.288175 2095 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:53:58.288283 kubelet[2095]: E1101 00:53:58.288189 2095 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:53:58.289494 kubelet[2095]: E1101 00:53:58.288437 2095 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:53:58.289494 kubelet[2095]: W1101 00:53:58.288448 2095 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:53:58.289494 kubelet[2095]: E1101 00:53:58.288457 2095 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:53:58.289494 kubelet[2095]: E1101 00:53:58.288649 2095 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:53:58.289494 kubelet[2095]: W1101 00:53:58.288656 2095 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:53:58.289494 kubelet[2095]: E1101 00:53:58.288675 2095 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:53:58.289494 kubelet[2095]: E1101 00:53:58.288896 2095 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:53:58.289494 kubelet[2095]: W1101 00:53:58.288904 2095 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:53:58.289494 kubelet[2095]: E1101 00:53:58.289055 2095 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:53:58.289494 kubelet[2095]: E1101 00:53:58.289300 2095 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:53:58.290085 kubelet[2095]: W1101 00:53:58.289309 2095 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:53:58.290085 kubelet[2095]: E1101 00:53:58.289396 2095 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:53:58.290085 kubelet[2095]: E1101 00:53:58.289830 2095 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:53:58.290085 kubelet[2095]: W1101 00:53:58.289840 2095 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:53:58.290085 kubelet[2095]: E1101 00:53:58.289924 2095 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:53:58.290411 kubelet[2095]: E1101 00:53:58.290373 2095 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:53:58.290614 kubelet[2095]: W1101 00:53:58.290518 2095 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:53:58.290706 kubelet[2095]: E1101 00:53:58.290692 2095 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:53:58.290941 kubelet[2095]: E1101 00:53:58.290919 2095 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:53:58.291027 kubelet[2095]: W1101 00:53:58.291013 2095 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:53:58.291193 kubelet[2095]: E1101 00:53:58.291180 2095 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:53:58.291394 kubelet[2095]: E1101 00:53:58.291383 2095 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:53:58.291504 kubelet[2095]: W1101 00:53:58.291490 2095 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:53:58.292273 kubelet[2095]: E1101 00:53:58.292254 2095 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:53:58.292524 kubelet[2095]: E1101 00:53:58.292511 2095 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:53:58.292636 kubelet[2095]: W1101 00:53:58.292618 2095 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:53:58.292842 kubelet[2095]: E1101 00:53:58.292811 2095 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:53:58.293105 kubelet[2095]: E1101 00:53:58.293090 2095 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:53:58.293204 kubelet[2095]: W1101 00:53:58.293186 2095 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:53:58.293403 kubelet[2095]: E1101 00:53:58.293387 2095 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:53:58.293589 kubelet[2095]: E1101 00:53:58.293578 2095 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:53:58.293663 kubelet[2095]: W1101 00:53:58.293649 2095 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:53:58.293800 kubelet[2095]: E1101 00:53:58.293787 2095 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:53:58.295055 kubelet[2095]: E1101 00:53:58.295012 2095 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:53:58.295406 kubelet[2095]: W1101 00:53:58.295385 2095 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:53:58.296719 kubelet[2095]: E1101 00:53:58.296697 2095 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:53:58.296866 kubelet[2095]: W1101 00:53:58.296848 2095 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:53:58.300999 kubelet[2095]: E1101 00:53:58.300979 2095 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:53:58.301167 kubelet[2095]: E1101 00:53:58.301153 2095 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:53:58.301985 kubelet[2095]: E1101 00:53:58.301965 2095 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:53:58.302078 kubelet[2095]: W1101 00:53:58.302063 2095 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:53:58.302342 kubelet[2095]: E1101 00:53:58.302327 2095 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:53:58.302546 kubelet[2095]: E1101 00:53:58.302534 2095 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:53:58.302618 kubelet[2095]: W1101 00:53:58.302605 2095 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:53:58.302771 kubelet[2095]: E1101 00:53:58.302734 2095 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:53:58.302967 kubelet[2095]: E1101 00:53:58.302955 2095 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:53:58.303356 kubelet[2095]: W1101 00:53:58.303337 2095 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:53:58.303525 kubelet[2095]: E1101 00:53:58.303512 2095 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:53:58.339233 env[1305]: time="2025-11-01T00:53:58.339189808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zfgvj,Uid:702b248f-18ef-46b4-89e3-5aed8c0a547d,Namespace:calico-system,Attempt:0,} returns sandbox id \"53147f3d6f6e5b83b95ee12f1d97c8eb679b9856974c13cf626efc26b79b2580\"" Nov 1 00:53:58.340997 kubelet[2095]: E1101 00:53:58.339945 2095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:53:58.924000 audit[2665]: NETFILTER_CFG table=filter:101 family=2 entries=22 op=nft_register_rule pid=2665 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:53:58.924000 audit[2665]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffdc60171e0 a2=0 a3=7ffdc60171cc items=0 ppid=2198 pid=2665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:58.924000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:53:58.930000 audit[2665]:
NETFILTER_CFG table=nat:102 family=2 entries=12 op=nft_register_rule pid=2665 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:53:58.930000 audit[2665]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdc60171e0 a2=0 a3=0 items=0 ppid=2198 pid=2665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:53:58.930000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:53:59.466179 kubelet[2095]: E1101 00:53:59.465940 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-twt7m" podUID="b08705e4-7a04-4c33-a8c8-a3f67298574d" Nov 1 00:53:59.495215 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount825901061.mount: Deactivated successfully. 
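The PROCTITLE records in the audit events above carry the audited process's argv as a hex string with NUL separators. A quick decode (plain Python; the hex value is copied verbatim from the log, nothing else is assumed) recovers the exact invocation behind both netfilter rule loads:

```python
# PROCTITLE value copied verbatim from the audit records above.
PROCTITLE = (
    "69707461626C65732D726573746F7265002D770035002D5700"
    "313030303030002D2D6E6F666C757368002D2D636F756E74657273"
)

# argv entries are NUL-separated in the audit record, then hex-encoded.
argv = [part.decode("ascii") for part in bytes.fromhex(PROCTITLE).split(b"\x00")]
print(argv)
# ['iptables-restore', '-w', '5', '-W', '100000', '--noflush', '--counters']
```

So both events are non-flushing, counter-preserving restores (`--noflush --counters`) that wait on the xtables lock (`-w 5 -W 100000`), dispatched through `/usr/sbin/xtables-nft-multi` as the truncated `comm="iptables-restor"` suggests.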
Nov 1 00:54:00.945551 env[1305]: time="2025-11-01T00:54:00.945491275Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:54:00.947829 env[1305]: time="2025-11-01T00:54:00.947799431Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:54:00.949460 env[1305]: time="2025-11-01T00:54:00.949435107Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:54:00.951041 env[1305]: time="2025-11-01T00:54:00.951016227Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:54:00.951639 env[1305]: time="2025-11-01T00:54:00.951612798Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 1 00:54:00.958535 env[1305]: time="2025-11-01T00:54:00.958482640Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 1 00:54:01.061688 env[1305]: time="2025-11-01T00:54:01.061503825Z" level=info msg="CreateContainer within sandbox \"5f6fbe4a8cee59edad5d0df02c4d3bea6ae6ab09abe511c0a2319d98f75cce2b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 1 00:54:01.077123 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount505999309.mount: Deactivated successfully. 
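The repeating kubelet triplet earlier (driver-call.go:262 / driver-call.go:149 / plugins.go:695) is one failure reported three ways: the FlexVolume prober execs each driver under the plugin directory with `init` and unmarshals its stdout as JSON, so a missing `nodeagent~uds/uds` binary yields empty output and the unmarshal fails. A minimal sketch of that chain (illustrative Python, not kubelet's Go code; only the driver path is taken from the log):

```python
import json
import subprocess

# Driver path copied from the log; on a host without the driver installed,
# this binary does not exist.
DRIVER = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

def probe_driver(path: str, command: str):
    """Exec a FlexVolume driver and parse its stdout as JSON (sketch)."""
    try:
        proc = subprocess.run([path, command], capture_output=True, text=True)
        output = proc.stdout
    except (FileNotFoundError, PermissionError):
        # Corresponds to driver-call.go:149: executable file not found,
        # output: ""
        output = ""
    try:
        return json.loads(output), None
    except json.JSONDecodeError:
        # Empty stdout cannot be parsed; Go's encoding/json reports this
        # case as "unexpected end of JSON input" (driver-call.go:262)
        return None, "unexpected end of JSON input"

status, err = probe_driver(DRIVER, "init")
```

Since the prober re-runs on plugin-directory events, the triplet recurs until something installs the binary; in this log that is the Calico `flexvol-driver` container started further below.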
Nov 1 00:54:01.081789 env[1305]: time="2025-11-01T00:54:01.081662761Z" level=info msg="CreateContainer within sandbox \"5f6fbe4a8cee59edad5d0df02c4d3bea6ae6ab09abe511c0a2319d98f75cce2b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"41cca09c19b0381ec93ebf24f9b495d5ff571c1c90f28a1bde7b92b018594db0\"" Nov 1 00:54:01.083671 env[1305]: time="2025-11-01T00:54:01.083638548Z" level=info msg="StartContainer for \"41cca09c19b0381ec93ebf24f9b495d5ff571c1c90f28a1bde7b92b018594db0\"" Nov 1 00:54:01.181443 env[1305]: time="2025-11-01T00:54:01.178671689Z" level=info msg="StartContainer for \"41cca09c19b0381ec93ebf24f9b495d5ff571c1c90f28a1bde7b92b018594db0\" returns successfully" Nov 1 00:54:01.466990 kubelet[2095]: E1101 00:54:01.466938 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-twt7m" podUID="b08705e4-7a04-4c33-a8c8-a3f67298574d" Nov 1 00:54:01.565036 kubelet[2095]: E1101 00:54:01.564981 2095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:54:01.585664 kubelet[2095]: I1101 00:54:01.585602 2095 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7cf7d48c8-vc5lg" podStartSLOduration=1.8650787850000001 podStartE2EDuration="4.585586154s" podCreationTimestamp="2025-11-01 00:53:57 +0000 UTC" firstStartedPulling="2025-11-01 00:53:58.237745957 +0000 UTC m=+21.995841539" lastFinishedPulling="2025-11-01 00:54:00.958253331 +0000 UTC m=+24.716348908" observedRunningTime="2025-11-01 00:54:01.58537038 +0000 UTC m=+25.343465982" watchObservedRunningTime="2025-11-01 00:54:01.585586154 +0000 UTC m=+25.343681761" Nov 1 00:54:01.630360 kubelet[2095]: E1101 
00:54:01.630311 2095 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:54:01.630545 kubelet[2095]: W1101 00:54:01.630399 2095 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:54:01.630545 kubelet[2095]: E1101 00:54:01.630426 2095 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:54:01.735457 kubelet[2095]: E1101 00:54:01.735368 2095 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Nov 1 00:54:02.335940 env[1305]: time="2025-11-01T00:54:02.335881517Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:54:02.337295 env[1305]: time="2025-11-01T00:54:02.337257369Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:54:02.338856 env[1305]: time="2025-11-01T00:54:02.338828978Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:54:02.339846 env[1305]: time="2025-11-01T00:54:02.339813407Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:54:02.340252 env[1305]: time="2025-11-01T00:54:02.340225759Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 1 00:54:02.344082 env[1305]: time="2025-11-01T00:54:02.344036854Z" level=info msg="CreateContainer within sandbox \"53147f3d6f6e5b83b95ee12f1d97c8eb679b9856974c13cf626efc26b79b2580\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 1 00:54:02.357670 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount126995984.mount: Deactivated successfully. 
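The pod_startup_latency_tracker line at 00:54:01.585 above reports two durations for calico-typha that should be internally consistent: podStartE2EDuration is observed-running minus pod creation, and podStartSLOduration should be that E2E figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). Checking the arithmetic with the monotonic `m=+` offsets copied from that log line:

```python
# Monotonic offsets (seconds) copied from the m=+ fields in the
# pod_startup_latency_tracker line above.
first_started_pulling = 21.995841539
last_finished_pulling = 24.716348908
pod_start_e2e = 4.585586154          # podStartE2EDuration from the log

pull_window = last_finished_pulling - first_started_pulling   # time spent pulling
slo_duration = pod_start_e2e - pull_window                    # E2E minus pull time
print(round(slo_duration, 9))
# 1.865078785 -- matches podStartSLOduration=1.8650787850000001
```

So the 2.72 s gap between the two reported durations is exactly the typha image pull that completed at 00:54:00.958.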
Nov 1 00:54:02.374972 env[1305]: time="2025-11-01T00:54:02.374901485Z" level=info msg="CreateContainer within sandbox \"53147f3d6f6e5b83b95ee12f1d97c8eb679b9856974c13cf626efc26b79b2580\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e31a00e17dd057d2f7042169d59f57e9ab125b3a7172abb7b4920d4b37c46516\"" Nov 1 00:54:02.376282 env[1305]: time="2025-11-01T00:54:02.376240623Z" level=info msg="StartContainer for \"e31a00e17dd057d2f7042169d59f57e9ab125b3a7172abb7b4920d4b37c46516\"" Nov 1 00:54:02.467002 env[1305]: time="2025-11-01T00:54:02.466948001Z" level=info msg="StartContainer for \"e31a00e17dd057d2f7042169d59f57e9ab125b3a7172abb7b4920d4b37c46516\" returns successfully" Nov 1 00:54:02.509780 env[1305]: time="2025-11-01T00:54:02.509707575Z" level=info msg="shim disconnected" id=e31a00e17dd057d2f7042169d59f57e9ab125b3a7172abb7b4920d4b37c46516 Nov 1 00:54:02.510078 env[1305]: time="2025-11-01T00:54:02.510053436Z" level=warning msg="cleaning up after shim disconnected" id=e31a00e17dd057d2f7042169d59f57e9ab125b3a7172abb7b4920d4b37c46516 namespace=k8s.io Nov 1 00:54:02.510198 env[1305]: time="2025-11-01T00:54:02.510180094Z" level=info msg="cleaning up dead shim" Nov 1 00:54:02.524044 env[1305]: time="2025-11-01T00:54:02.524002334Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:54:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2787 runtime=io.containerd.runc.v2\n" Nov 1 00:54:02.571780 kubelet[2095]: I1101 00:54:02.570929 2095 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 00:54:02.571780 kubelet[2095]: E1101 00:54:02.571328 2095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:54:02.572693 kubelet[2095]: E1101 00:54:02.572507 2095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:54:02.576713 env[1305]: time="2025-11-01T00:54:02.576680829Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 1 00:54:03.039461 systemd[1]: run-containerd-runc-k8s.io-e31a00e17dd057d2f7042169d59f57e9ab125b3a7172abb7b4920d4b37c46516-runc.oCHXQW.mount: Deactivated successfully. Nov 1 00:54:03.039906 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e31a00e17dd057d2f7042169d59f57e9ab125b3a7172abb7b4920d4b37c46516-rootfs.mount: Deactivated successfully. Nov 1 00:54:03.465846 kubelet[2095]: E1101 00:54:03.465793 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-twt7m" podUID="b08705e4-7a04-4c33-a8c8-a3f67298574d" Nov 1 00:54:05.466296 kubelet[2095]: E1101 00:54:05.466240 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-twt7m" podUID="b08705e4-7a04-4c33-a8c8-a3f67298574d" Nov 1 00:54:05.979294 env[1305]: time="2025-11-01T00:54:05.979246749Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:54:05.981561 env[1305]: time="2025-11-01T00:54:05.981469329Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:54:05.982686 env[1305]: time="2025-11-01T00:54:05.982652788Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:54:05.984480 env[1305]: time="2025-11-01T00:54:05.984453100Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:54:05.986627 env[1305]: time="2025-11-01T00:54:05.986596434Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 1 00:54:05.990190 env[1305]: time="2025-11-01T00:54:05.990146793Z" level=info msg="CreateContainer within sandbox \"53147f3d6f6e5b83b95ee12f1d97c8eb679b9856974c13cf626efc26b79b2580\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 1 00:54:06.004222 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2588981519.mount: Deactivated successfully. 
Nov 1 00:54:06.009890 env[1305]: time="2025-11-01T00:54:06.009854438Z" level=info msg="CreateContainer within sandbox \"53147f3d6f6e5b83b95ee12f1d97c8eb679b9856974c13cf626efc26b79b2580\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"26331d7644b5bc51a1f32fca7051e2cf376f4c8e71ea203e8b7a00d4a6288c05\"" Nov 1 00:54:06.010728 env[1305]: time="2025-11-01T00:54:06.010702919Z" level=info msg="StartContainer for \"26331d7644b5bc51a1f32fca7051e2cf376f4c8e71ea203e8b7a00d4a6288c05\"" Nov 1 00:54:06.082787 env[1305]: time="2025-11-01T00:54:06.082727214Z" level=info msg="StartContainer for \"26331d7644b5bc51a1f32fca7051e2cf376f4c8e71ea203e8b7a00d4a6288c05\" returns successfully" Nov 1 00:54:06.582404 kubelet[2095]: E1101 00:54:06.582359 2095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:54:06.727808 env[1305]: time="2025-11-01T00:54:06.727745106Z" level=info msg="shim disconnected" id=26331d7644b5bc51a1f32fca7051e2cf376f4c8e71ea203e8b7a00d4a6288c05 Nov 1 00:54:06.728102 env[1305]: time="2025-11-01T00:54:06.728081202Z" level=warning msg="cleaning up after shim disconnected" id=26331d7644b5bc51a1f32fca7051e2cf376f4c8e71ea203e8b7a00d4a6288c05 namespace=k8s.io Nov 1 00:54:06.728222 env[1305]: time="2025-11-01T00:54:06.728203925Z" level=info msg="cleaning up dead shim" Nov 1 00:54:06.738455 env[1305]: time="2025-11-01T00:54:06.738370936Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:54:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2856 runtime=io.containerd.runc.v2\n" Nov 1 00:54:06.811380 kubelet[2095]: I1101 00:54:06.811248 2095 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 1 00:54:06.967740 kubelet[2095]: I1101 00:54:06.967685 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0979e255-e4e9-4664-a95e-5354a9f7d531-tigera-ca-bundle\") pod \"calico-kube-controllers-85b568d67d-z4c8c\" (UID: \"0979e255-e4e9-4664-a95e-5354a9f7d531\") " pod="calico-system/calico-kube-controllers-85b568d67d-z4c8c" Nov 1 00:54:06.967740 kubelet[2095]: I1101 00:54:06.967731 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/91b31c91-0235-44c1-8490-69cf1d3604f2-config-volume\") pod \"coredns-668d6bf9bc-hbw54\" (UID: \"91b31c91-0235-44c1-8490-69cf1d3604f2\") " pod="kube-system/coredns-668d6bf9bc-hbw54" Nov 1 00:54:06.967964 kubelet[2095]: I1101 00:54:06.967771 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6krgp\" (UniqueName: \"kubernetes.io/projected/91b31c91-0235-44c1-8490-69cf1d3604f2-kube-api-access-6krgp\") pod \"coredns-668d6bf9bc-hbw54\" (UID: \"91b31c91-0235-44c1-8490-69cf1d3604f2\") " pod="kube-system/coredns-668d6bf9bc-hbw54" Nov 1 00:54:06.967964 kubelet[2095]: I1101 00:54:06.967790 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04f4ba43-b773-4444-b355-28563af8171b-config-volume\") pod \"coredns-668d6bf9bc-95hcw\" (UID: \"04f4ba43-b773-4444-b355-28563af8171b\") " pod="kube-system/coredns-668d6bf9bc-95hcw" Nov 1 00:54:06.967964 kubelet[2095]: I1101 00:54:06.967806 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fprsn\" (UniqueName: \"kubernetes.io/projected/04f4ba43-b773-4444-b355-28563af8171b-kube-api-access-fprsn\") pod \"coredns-668d6bf9bc-95hcw\" (UID: \"04f4ba43-b773-4444-b355-28563af8171b\") " pod="kube-system/coredns-668d6bf9bc-95hcw" Nov 1 00:54:06.967964 kubelet[2095]: I1101 00:54:06.967826 2095 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmmkj\" (UniqueName: \"kubernetes.io/projected/0979e255-e4e9-4664-a95e-5354a9f7d531-kube-api-access-bmmkj\") pod \"calico-kube-controllers-85b568d67d-z4c8c\" (UID: \"0979e255-e4e9-4664-a95e-5354a9f7d531\") " pod="calico-system/calico-kube-controllers-85b568d67d-z4c8c" Nov 1 00:54:07.001110 systemd[1]: run-containerd-runc-k8s.io-26331d7644b5bc51a1f32fca7051e2cf376f4c8e71ea203e8b7a00d4a6288c05-runc.4LeO2V.mount: Deactivated successfully. Nov 1 00:54:07.001267 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-26331d7644b5bc51a1f32fca7051e2cf376f4c8e71ea203e8b7a00d4a6288c05-rootfs.mount: Deactivated successfully. Nov 1 00:54:07.068168 kubelet[2095]: I1101 00:54:07.068116 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/acf47117-3eb1-4aa3-89a4-bc9fecdad703-goldmane-ca-bundle\") pod \"goldmane-666569f655-j9dnh\" (UID: \"acf47117-3eb1-4aa3-89a4-bc9fecdad703\") " pod="calico-system/goldmane-666569f655-j9dnh" Nov 1 00:54:07.068466 kubelet[2095]: I1101 00:54:07.068434 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/acf47117-3eb1-4aa3-89a4-bc9fecdad703-goldmane-key-pair\") pod \"goldmane-666569f655-j9dnh\" (UID: \"acf47117-3eb1-4aa3-89a4-bc9fecdad703\") " pod="calico-system/goldmane-666569f655-j9dnh" Nov 1 00:54:07.068612 kubelet[2095]: I1101 00:54:07.068586 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/749cb733-2d74-4165-b209-f5d9ea430e96-whisker-ca-bundle\") pod \"whisker-56478f7ccd-qkwt8\" (UID: \"749cb733-2d74-4165-b209-f5d9ea430e96\") " pod="calico-system/whisker-56478f7ccd-qkwt8" Nov 1 00:54:07.068714 kubelet[2095]: I1101 00:54:07.068699 
2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0aeb6ff7-2d7d-423c-8068-1607bda1ebe8-calico-apiserver-certs\") pod \"calico-apiserver-5f668d4ccf-gzvhz\" (UID: \"0aeb6ff7-2d7d-423c-8068-1607bda1ebe8\") " pod="calico-apiserver/calico-apiserver-5f668d4ccf-gzvhz" Nov 1 00:54:07.068876 kubelet[2095]: I1101 00:54:07.068860 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gj6f7\" (UniqueName: \"kubernetes.io/projected/0aeb6ff7-2d7d-423c-8068-1607bda1ebe8-kube-api-access-gj6f7\") pod \"calico-apiserver-5f668d4ccf-gzvhz\" (UID: \"0aeb6ff7-2d7d-423c-8068-1607bda1ebe8\") " pod="calico-apiserver/calico-apiserver-5f668d4ccf-gzvhz" Nov 1 00:54:07.068996 kubelet[2095]: I1101 00:54:07.068981 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/749cb733-2d74-4165-b209-f5d9ea430e96-whisker-backend-key-pair\") pod \"whisker-56478f7ccd-qkwt8\" (UID: \"749cb733-2d74-4165-b209-f5d9ea430e96\") " pod="calico-system/whisker-56478f7ccd-qkwt8" Nov 1 00:54:07.069102 kubelet[2095]: I1101 00:54:07.069086 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79wrz\" (UniqueName: \"kubernetes.io/projected/447c37d4-c1de-4035-a57b-b729047ea7fb-kube-api-access-79wrz\") pod \"calico-apiserver-5f668d4ccf-fmsxj\" (UID: \"447c37d4-c1de-4035-a57b-b729047ea7fb\") " pod="calico-apiserver/calico-apiserver-5f668d4ccf-fmsxj" Nov 1 00:54:07.069220 kubelet[2095]: I1101 00:54:07.069203 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvfdv\" (UniqueName: \"kubernetes.io/projected/acf47117-3eb1-4aa3-89a4-bc9fecdad703-kube-api-access-vvfdv\") pod \"goldmane-666569f655-j9dnh\" (UID: 
\"acf47117-3eb1-4aa3-89a4-bc9fecdad703\") " pod="calico-system/goldmane-666569f655-j9dnh" Nov 1 00:54:07.069326 kubelet[2095]: I1101 00:54:07.069310 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/447c37d4-c1de-4035-a57b-b729047ea7fb-calico-apiserver-certs\") pod \"calico-apiserver-5f668d4ccf-fmsxj\" (UID: \"447c37d4-c1de-4035-a57b-b729047ea7fb\") " pod="calico-apiserver/calico-apiserver-5f668d4ccf-fmsxj" Nov 1 00:54:07.069436 kubelet[2095]: I1101 00:54:07.069420 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/acf47117-3eb1-4aa3-89a4-bc9fecdad703-config\") pod \"goldmane-666569f655-j9dnh\" (UID: \"acf47117-3eb1-4aa3-89a4-bc9fecdad703\") " pod="calico-system/goldmane-666569f655-j9dnh" Nov 1 00:54:07.069579 kubelet[2095]: I1101 00:54:07.069563 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz8qg\" (UniqueName: \"kubernetes.io/projected/749cb733-2d74-4165-b209-f5d9ea430e96-kube-api-access-qz8qg\") pod \"whisker-56478f7ccd-qkwt8\" (UID: \"749cb733-2d74-4165-b209-f5d9ea430e96\") " pod="calico-system/whisker-56478f7ccd-qkwt8" Nov 1 00:54:07.144396 kubelet[2095]: E1101 00:54:07.144166 2095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:54:07.146395 env[1305]: time="2025-11-01T00:54:07.145138924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-95hcw,Uid:04f4ba43-b773-4444-b355-28563af8171b,Namespace:kube-system,Attempt:0,}" Nov 1 00:54:07.168723 env[1305]: time="2025-11-01T00:54:07.168653932Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-85b568d67d-z4c8c,Uid:0979e255-e4e9-4664-a95e-5354a9f7d531,Namespace:calico-system,Attempt:0,}" Nov 1 00:54:07.191622 kubelet[2095]: E1101 00:54:07.180218 2095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:54:07.192102 env[1305]: time="2025-11-01T00:54:07.192067770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hbw54,Uid:91b31c91-0235-44c1-8490-69cf1d3604f2,Namespace:kube-system,Attempt:0,}" Nov 1 00:54:07.323173 env[1305]: time="2025-11-01T00:54:07.322428447Z" level=error msg="Failed to destroy network for sandbox \"d44b499bf360f5f24ec2911ec851a0ee95d2d5f2107e00217cff1abcc2259caf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:54:07.323775 env[1305]: time="2025-11-01T00:54:07.323716946Z" level=error msg="encountered an error cleaning up failed sandbox \"d44b499bf360f5f24ec2911ec851a0ee95d2d5f2107e00217cff1abcc2259caf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:54:07.323933 env[1305]: time="2025-11-01T00:54:07.323903250Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-95hcw,Uid:04f4ba43-b773-4444-b355-28563af8171b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d44b499bf360f5f24ec2911ec851a0ee95d2d5f2107e00217cff1abcc2259caf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:54:07.324337 
kubelet[2095]: E1101 00:54:07.324289 2095 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d44b499bf360f5f24ec2911ec851a0ee95d2d5f2107e00217cff1abcc2259caf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:54:07.324458 kubelet[2095]: E1101 00:54:07.324390 2095 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d44b499bf360f5f24ec2911ec851a0ee95d2d5f2107e00217cff1abcc2259caf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-95hcw" Nov 1 00:54:07.324458 kubelet[2095]: E1101 00:54:07.324426 2095 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d44b499bf360f5f24ec2911ec851a0ee95d2d5f2107e00217cff1abcc2259caf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-95hcw" Nov 1 00:54:07.324525 kubelet[2095]: E1101 00:54:07.324477 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-95hcw_kube-system(04f4ba43-b773-4444-b355-28563af8171b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-95hcw_kube-system(04f4ba43-b773-4444-b355-28563af8171b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d44b499bf360f5f24ec2911ec851a0ee95d2d5f2107e00217cff1abcc2259caf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-95hcw" podUID="04f4ba43-b773-4444-b355-28563af8171b" Nov 1 00:54:07.351114 env[1305]: time="2025-11-01T00:54:07.351042062Z" level=error msg="Failed to destroy network for sandbox \"a71c47bf229b80276f24cc7651603be913775ecceb2cc3f939c907c10c7a4fd4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:54:07.351423 env[1305]: time="2025-11-01T00:54:07.351389308Z" level=error msg="encountered an error cleaning up failed sandbox \"a71c47bf229b80276f24cc7651603be913775ecceb2cc3f939c907c10c7a4fd4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:54:07.351505 env[1305]: time="2025-11-01T00:54:07.351446722Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85b568d67d-z4c8c,Uid:0979e255-e4e9-4664-a95e-5354a9f7d531,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a71c47bf229b80276f24cc7651603be913775ecceb2cc3f939c907c10c7a4fd4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:54:07.351693 kubelet[2095]: E1101 00:54:07.351658 2095 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a71c47bf229b80276f24cc7651603be913775ecceb2cc3f939c907c10c7a4fd4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Nov 1 00:54:07.351868 kubelet[2095]: E1101 00:54:07.351720 2095 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a71c47bf229b80276f24cc7651603be913775ecceb2cc3f939c907c10c7a4fd4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85b568d67d-z4c8c" Nov 1 00:54:07.351868 kubelet[2095]: E1101 00:54:07.351743 2095 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a71c47bf229b80276f24cc7651603be913775ecceb2cc3f939c907c10c7a4fd4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85b568d67d-z4c8c" Nov 1 00:54:07.351967 kubelet[2095]: E1101 00:54:07.351874 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-85b568d67d-z4c8c_calico-system(0979e255-e4e9-4664-a95e-5354a9f7d531)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-85b568d67d-z4c8c_calico-system(0979e255-e4e9-4664-a95e-5354a9f7d531)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a71c47bf229b80276f24cc7651603be913775ecceb2cc3f939c907c10c7a4fd4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-85b568d67d-z4c8c" podUID="0979e255-e4e9-4664-a95e-5354a9f7d531" Nov 1 00:54:07.353485 env[1305]: time="2025-11-01T00:54:07.353429843Z" level=error msg="Failed to destroy network 
for sandbox \"418d6121b43496d539dd60c11a3eccbd66c28dbcab065ad62ba4cbb1ed4d68aa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:54:07.353968 env[1305]: time="2025-11-01T00:54:07.353878840Z" level=error msg="encountered an error cleaning up failed sandbox \"418d6121b43496d539dd60c11a3eccbd66c28dbcab065ad62ba4cbb1ed4d68aa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:54:07.354099 env[1305]: time="2025-11-01T00:54:07.354071110Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hbw54,Uid:91b31c91-0235-44c1-8490-69cf1d3604f2,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"418d6121b43496d539dd60c11a3eccbd66c28dbcab065ad62ba4cbb1ed4d68aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:54:07.354471 kubelet[2095]: E1101 00:54:07.354310 2095 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"418d6121b43496d539dd60c11a3eccbd66c28dbcab065ad62ba4cbb1ed4d68aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:54:07.354471 kubelet[2095]: E1101 00:54:07.354352 2095 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"418d6121b43496d539dd60c11a3eccbd66c28dbcab065ad62ba4cbb1ed4d68aa\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-hbw54" Nov 1 00:54:07.354471 kubelet[2095]: E1101 00:54:07.354375 2095 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"418d6121b43496d539dd60c11a3eccbd66c28dbcab065ad62ba4cbb1ed4d68aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-hbw54" Nov 1 00:54:07.354622 kubelet[2095]: E1101 00:54:07.354409 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-hbw54_kube-system(91b31c91-0235-44c1-8490-69cf1d3604f2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-hbw54_kube-system(91b31c91-0235-44c1-8490-69cf1d3604f2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"418d6121b43496d539dd60c11a3eccbd66c28dbcab065ad62ba4cbb1ed4d68aa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-hbw54" podUID="91b31c91-0235-44c1-8490-69cf1d3604f2" Nov 1 00:54:07.470102 env[1305]: time="2025-11-01T00:54:07.470044960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-twt7m,Uid:b08705e4-7a04-4c33-a8c8-a3f67298574d,Namespace:calico-system,Attempt:0,}" Nov 1 00:54:07.486551 env[1305]: time="2025-11-01T00:54:07.486494420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-j9dnh,Uid:acf47117-3eb1-4aa3-89a4-bc9fecdad703,Namespace:calico-system,Attempt:0,}" Nov 1 00:54:07.486851 env[1305]: time="2025-11-01T00:54:07.486819394Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-56478f7ccd-qkwt8,Uid:749cb733-2d74-4165-b209-f5d9ea430e96,Namespace:calico-system,Attempt:0,}" Nov 1 00:54:07.487712 env[1305]: time="2025-11-01T00:54:07.487675904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f668d4ccf-gzvhz,Uid:0aeb6ff7-2d7d-423c-8068-1607bda1ebe8,Namespace:calico-apiserver,Attempt:0,}" Nov 1 00:54:07.491711 env[1305]: time="2025-11-01T00:54:07.491664987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f668d4ccf-fmsxj,Uid:447c37d4-c1de-4035-a57b-b729047ea7fb,Namespace:calico-apiserver,Attempt:0,}" Nov 1 00:54:07.586911 kubelet[2095]: I1101 00:54:07.586017 2095 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="418d6121b43496d539dd60c11a3eccbd66c28dbcab065ad62ba4cbb1ed4d68aa" Nov 1 00:54:07.589128 env[1305]: time="2025-11-01T00:54:07.589058643Z" level=info msg="StopPodSandbox for \"418d6121b43496d539dd60c11a3eccbd66c28dbcab065ad62ba4cbb1ed4d68aa\"" Nov 1 00:54:07.594382 kubelet[2095]: E1101 00:54:07.592881 2095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:54:07.598089 env[1305]: time="2025-11-01T00:54:07.598032926Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 1 00:54:07.609200 kubelet[2095]: I1101 00:54:07.609172 2095 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a71c47bf229b80276f24cc7651603be913775ecceb2cc3f939c907c10c7a4fd4" Nov 1 00:54:07.613531 env[1305]: time="2025-11-01T00:54:07.612236886Z" level=info msg="StopPodSandbox for \"a71c47bf229b80276f24cc7651603be913775ecceb2cc3f939c907c10c7a4fd4\"" Nov 1 00:54:07.618238 env[1305]: time="2025-11-01T00:54:07.618192975Z" level=error msg="Failed to destroy network for sandbox 
\"64b90828beb527fb7df55cab820b5b862fa0a86b6bce440a5f19677415b2d1b9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:54:07.618620 env[1305]: time="2025-11-01T00:54:07.618580551Z" level=error msg="encountered an error cleaning up failed sandbox \"64b90828beb527fb7df55cab820b5b862fa0a86b6bce440a5f19677415b2d1b9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:54:07.618715 env[1305]: time="2025-11-01T00:54:07.618631580Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-twt7m,Uid:b08705e4-7a04-4c33-a8c8-a3f67298574d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"64b90828beb527fb7df55cab820b5b862fa0a86b6bce440a5f19677415b2d1b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:54:07.618942 kubelet[2095]: I1101 00:54:07.618920 2095 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d44b499bf360f5f24ec2911ec851a0ee95d2d5f2107e00217cff1abcc2259caf" Nov 1 00:54:07.620896 kubelet[2095]: E1101 00:54:07.620832 2095 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64b90828beb527fb7df55cab820b5b862fa0a86b6bce440a5f19677415b2d1b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:54:07.621077 kubelet[2095]: E1101 00:54:07.621049 2095 kuberuntime_sandbox.go:72] "Failed to create sandbox 
for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64b90828beb527fb7df55cab820b5b862fa0a86b6bce440a5f19677415b2d1b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-twt7m" Nov 1 00:54:07.621283 kubelet[2095]: E1101 00:54:07.621162 2095 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64b90828beb527fb7df55cab820b5b862fa0a86b6bce440a5f19677415b2d1b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-twt7m" Nov 1 00:54:07.621456 kubelet[2095]: E1101 00:54:07.621420 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-twt7m_calico-system(b08705e4-7a04-4c33-a8c8-a3f67298574d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-twt7m_calico-system(b08705e4-7a04-4c33-a8c8-a3f67298574d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"64b90828beb527fb7df55cab820b5b862fa0a86b6bce440a5f19677415b2d1b9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-twt7m" podUID="b08705e4-7a04-4c33-a8c8-a3f67298574d" Nov 1 00:54:07.621780 env[1305]: time="2025-11-01T00:54:07.621728671Z" level=info msg="StopPodSandbox for \"d44b499bf360f5f24ec2911ec851a0ee95d2d5f2107e00217cff1abcc2259caf\"" Nov 1 00:54:07.688708 env[1305]: time="2025-11-01T00:54:07.688637896Z" level=error msg="Failed to destroy network for sandbox 
\"bdf1219825ffb0504da66c39f49f8fbd55336a1f4e428b159b218ea10f5222a6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:54:07.692430 env[1305]: time="2025-11-01T00:54:07.692380158Z" level=error msg="encountered an error cleaning up failed sandbox \"bdf1219825ffb0504da66c39f49f8fbd55336a1f4e428b159b218ea10f5222a6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:54:07.692555 env[1305]: time="2025-11-01T00:54:07.692444187Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-j9dnh,Uid:acf47117-3eb1-4aa3-89a4-bc9fecdad703,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bdf1219825ffb0504da66c39f49f8fbd55336a1f4e428b159b218ea10f5222a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:54:07.692681 kubelet[2095]: E1101 00:54:07.692642 2095 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bdf1219825ffb0504da66c39f49f8fbd55336a1f4e428b159b218ea10f5222a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:54:07.692838 kubelet[2095]: E1101 00:54:07.692702 2095 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bdf1219825ffb0504da66c39f49f8fbd55336a1f4e428b159b218ea10f5222a6\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-j9dnh" Nov 1 00:54:07.692838 kubelet[2095]: E1101 00:54:07.692729 2095 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bdf1219825ffb0504da66c39f49f8fbd55336a1f4e428b159b218ea10f5222a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-j9dnh" Nov 1 00:54:07.692838 kubelet[2095]: E1101 00:54:07.692809 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-j9dnh_calico-system(acf47117-3eb1-4aa3-89a4-bc9fecdad703)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-j9dnh_calico-system(acf47117-3eb1-4aa3-89a4-bc9fecdad703)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bdf1219825ffb0504da66c39f49f8fbd55336a1f4e428b159b218ea10f5222a6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-j9dnh" podUID="acf47117-3eb1-4aa3-89a4-bc9fecdad703" Nov 1 00:54:07.763126 env[1305]: time="2025-11-01T00:54:07.763044916Z" level=error msg="StopPodSandbox for \"418d6121b43496d539dd60c11a3eccbd66c28dbcab065ad62ba4cbb1ed4d68aa\" failed" error="failed to destroy network for sandbox \"418d6121b43496d539dd60c11a3eccbd66c28dbcab065ad62ba4cbb1ed4d68aa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:54:07.763659 kubelet[2095]: 
E1101 00:54:07.763614 2095 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"418d6121b43496d539dd60c11a3eccbd66c28dbcab065ad62ba4cbb1ed4d68aa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="418d6121b43496d539dd60c11a3eccbd66c28dbcab065ad62ba4cbb1ed4d68aa" Nov 1 00:54:07.763836 kubelet[2095]: E1101 00:54:07.763688 2095 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"418d6121b43496d539dd60c11a3eccbd66c28dbcab065ad62ba4cbb1ed4d68aa"} Nov 1 00:54:07.763836 kubelet[2095]: E1101 00:54:07.763771 2095 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"91b31c91-0235-44c1-8490-69cf1d3604f2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"418d6121b43496d539dd60c11a3eccbd66c28dbcab065ad62ba4cbb1ed4d68aa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:54:07.763836 kubelet[2095]: E1101 00:54:07.763795 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"91b31c91-0235-44c1-8490-69cf1d3604f2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"418d6121b43496d539dd60c11a3eccbd66c28dbcab065ad62ba4cbb1ed4d68aa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-hbw54" podUID="91b31c91-0235-44c1-8490-69cf1d3604f2" Nov 1 00:54:07.766163 env[1305]: time="2025-11-01T00:54:07.766107612Z" 
level=error msg="Failed to destroy network for sandbox \"ccff4abf279d481fe758cf98b6a09e4c82985082166ebb004231c04ed8110c8b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:54:07.767688 env[1305]: time="2025-11-01T00:54:07.767617485Z" level=error msg="encountered an error cleaning up failed sandbox \"ccff4abf279d481fe758cf98b6a09e4c82985082166ebb004231c04ed8110c8b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:54:07.767832 env[1305]: time="2025-11-01T00:54:07.767700921Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f668d4ccf-gzvhz,Uid:0aeb6ff7-2d7d-423c-8068-1607bda1ebe8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ccff4abf279d481fe758cf98b6a09e4c82985082166ebb004231c04ed8110c8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:54:07.768151 kubelet[2095]: E1101 00:54:07.768093 2095 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccff4abf279d481fe758cf98b6a09e4c82985082166ebb004231c04ed8110c8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:54:07.768261 kubelet[2095]: E1101 00:54:07.768174 2095 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"ccff4abf279d481fe758cf98b6a09e4c82985082166ebb004231c04ed8110c8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f668d4ccf-gzvhz" Nov 1 00:54:07.768261 kubelet[2095]: E1101 00:54:07.768216 2095 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ccff4abf279d481fe758cf98b6a09e4c82985082166ebb004231c04ed8110c8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f668d4ccf-gzvhz" Nov 1 00:54:07.768354 kubelet[2095]: E1101 00:54:07.768275 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5f668d4ccf-gzvhz_calico-apiserver(0aeb6ff7-2d7d-423c-8068-1607bda1ebe8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5f668d4ccf-gzvhz_calico-apiserver(0aeb6ff7-2d7d-423c-8068-1607bda1ebe8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ccff4abf279d481fe758cf98b6a09e4c82985082166ebb004231c04ed8110c8b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f668d4ccf-gzvhz" podUID="0aeb6ff7-2d7d-423c-8068-1607bda1ebe8" Nov 1 00:54:07.794897 env[1305]: time="2025-11-01T00:54:07.794825541Z" level=error msg="StopPodSandbox for \"d44b499bf360f5f24ec2911ec851a0ee95d2d5f2107e00217cff1abcc2259caf\" failed" error="failed to destroy network for sandbox \"d44b499bf360f5f24ec2911ec851a0ee95d2d5f2107e00217cff1abcc2259caf\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:54:07.795564 kubelet[2095]: E1101 00:54:07.795360 2095 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d44b499bf360f5f24ec2911ec851a0ee95d2d5f2107e00217cff1abcc2259caf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d44b499bf360f5f24ec2911ec851a0ee95d2d5f2107e00217cff1abcc2259caf" Nov 1 00:54:07.795564 kubelet[2095]: E1101 00:54:07.795427 2095 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d44b499bf360f5f24ec2911ec851a0ee95d2d5f2107e00217cff1abcc2259caf"} Nov 1 00:54:07.795564 kubelet[2095]: E1101 00:54:07.795481 2095 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"04f4ba43-b773-4444-b355-28563af8171b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d44b499bf360f5f24ec2911ec851a0ee95d2d5f2107e00217cff1abcc2259caf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:54:07.795564 kubelet[2095]: E1101 00:54:07.795524 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"04f4ba43-b773-4444-b355-28563af8171b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d44b499bf360f5f24ec2911ec851a0ee95d2d5f2107e00217cff1abcc2259caf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-95hcw" podUID="04f4ba43-b773-4444-b355-28563af8171b" Nov 1 00:54:07.796093 env[1305]: time="2025-11-01T00:54:07.795291136Z" level=error msg="StopPodSandbox for \"a71c47bf229b80276f24cc7651603be913775ecceb2cc3f939c907c10c7a4fd4\" failed" error="failed to destroy network for sandbox \"a71c47bf229b80276f24cc7651603be913775ecceb2cc3f939c907c10c7a4fd4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:54:07.796665 kubelet[2095]: E1101 00:54:07.796625 2095 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a71c47bf229b80276f24cc7651603be913775ecceb2cc3f939c907c10c7a4fd4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a71c47bf229b80276f24cc7651603be913775ecceb2cc3f939c907c10c7a4fd4" Nov 1 00:54:07.796797 kubelet[2095]: E1101 00:54:07.796670 2095 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a71c47bf229b80276f24cc7651603be913775ecceb2cc3f939c907c10c7a4fd4"} Nov 1 00:54:07.796797 kubelet[2095]: E1101 00:54:07.796711 2095 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0979e255-e4e9-4664-a95e-5354a9f7d531\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a71c47bf229b80276f24cc7651603be913775ecceb2cc3f939c907c10c7a4fd4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:54:07.796797 kubelet[2095]: E1101 00:54:07.796732 2095 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0979e255-e4e9-4664-a95e-5354a9f7d531\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a71c47bf229b80276f24cc7651603be913775ecceb2cc3f939c907c10c7a4fd4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-85b568d67d-z4c8c" podUID="0979e255-e4e9-4664-a95e-5354a9f7d531" Nov 1 00:54:07.803448 env[1305]: time="2025-11-01T00:54:07.803395726Z" level=error msg="Failed to destroy network for sandbox \"2a13d0cdc8990d99ba81d01c0c9b313b654101f1dd097a248bb33976f2d75804\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:54:07.804097 env[1305]: time="2025-11-01T00:54:07.804048753Z" level=error msg="encountered an error cleaning up failed sandbox \"2a13d0cdc8990d99ba81d01c0c9b313b654101f1dd097a248bb33976f2d75804\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:54:07.804297 env[1305]: time="2025-11-01T00:54:07.804245103Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f668d4ccf-fmsxj,Uid:447c37d4-c1de-4035-a57b-b729047ea7fb,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2a13d0cdc8990d99ba81d01c0c9b313b654101f1dd097a248bb33976f2d75804\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:54:07.804721 kubelet[2095]: E1101 00:54:07.804662 2095 
log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a13d0cdc8990d99ba81d01c0c9b313b654101f1dd097a248bb33976f2d75804\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:54:07.804855 kubelet[2095]: E1101 00:54:07.804731 2095 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a13d0cdc8990d99ba81d01c0c9b313b654101f1dd097a248bb33976f2d75804\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f668d4ccf-fmsxj" Nov 1 00:54:07.804855 kubelet[2095]: E1101 00:54:07.804798 2095 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a13d0cdc8990d99ba81d01c0c9b313b654101f1dd097a248bb33976f2d75804\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f668d4ccf-fmsxj" Nov 1 00:54:07.804954 kubelet[2095]: E1101 00:54:07.804865 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5f668d4ccf-fmsxj_calico-apiserver(447c37d4-c1de-4035-a57b-b729047ea7fb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5f668d4ccf-fmsxj_calico-apiserver(447c37d4-c1de-4035-a57b-b729047ea7fb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2a13d0cdc8990d99ba81d01c0c9b313b654101f1dd097a248bb33976f2d75804\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f668d4ccf-fmsxj" podUID="447c37d4-c1de-4035-a57b-b729047ea7fb" Nov 1 00:54:07.816866 env[1305]: time="2025-11-01T00:54:07.816800769Z" level=error msg="Failed to destroy network for sandbox \"421b6723aed3cd2f0f62334a373da8e89499ded19c5999e2874189c1e0e9a773\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:54:07.817235 env[1305]: time="2025-11-01T00:54:07.817199373Z" level=error msg="encountered an error cleaning up failed sandbox \"421b6723aed3cd2f0f62334a373da8e89499ded19c5999e2874189c1e0e9a773\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:54:07.817315 env[1305]: time="2025-11-01T00:54:07.817251859Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-56478f7ccd-qkwt8,Uid:749cb733-2d74-4165-b209-f5d9ea430e96,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"421b6723aed3cd2f0f62334a373da8e89499ded19c5999e2874189c1e0e9a773\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:54:07.817483 kubelet[2095]: E1101 00:54:07.817445 2095 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"421b6723aed3cd2f0f62334a373da8e89499ded19c5999e2874189c1e0e9a773\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Nov 1 00:54:07.817594 kubelet[2095]: E1101 00:54:07.817536 2095 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"421b6723aed3cd2f0f62334a373da8e89499ded19c5999e2874189c1e0e9a773\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-56478f7ccd-qkwt8" Nov 1 00:54:07.817594 kubelet[2095]: E1101 00:54:07.817558 2095 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"421b6723aed3cd2f0f62334a373da8e89499ded19c5999e2874189c1e0e9a773\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-56478f7ccd-qkwt8" Nov 1 00:54:07.817689 kubelet[2095]: E1101 00:54:07.817613 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-56478f7ccd-qkwt8_calico-system(749cb733-2d74-4165-b209-f5d9ea430e96)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-56478f7ccd-qkwt8_calico-system(749cb733-2d74-4165-b209-f5d9ea430e96)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"421b6723aed3cd2f0f62334a373da8e89499ded19c5999e2874189c1e0e9a773\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-56478f7ccd-qkwt8" podUID="749cb733-2d74-4165-b209-f5d9ea430e96" Nov 1 00:54:08.011625 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d44b499bf360f5f24ec2911ec851a0ee95d2d5f2107e00217cff1abcc2259caf-shm.mount: 
Deactivated successfully. Nov 1 00:54:08.623133 kubelet[2095]: I1101 00:54:08.623066 2095 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64b90828beb527fb7df55cab820b5b862fa0a86b6bce440a5f19677415b2d1b9" Nov 1 00:54:08.624023 env[1305]: time="2025-11-01T00:54:08.623975311Z" level=info msg="StopPodSandbox for \"64b90828beb527fb7df55cab820b5b862fa0a86b6bce440a5f19677415b2d1b9\"" Nov 1 00:54:08.626809 kubelet[2095]: I1101 00:54:08.626733 2095 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccff4abf279d481fe758cf98b6a09e4c82985082166ebb004231c04ed8110c8b" Nov 1 00:54:08.627498 env[1305]: time="2025-11-01T00:54:08.627459284Z" level=info msg="StopPodSandbox for \"ccff4abf279d481fe758cf98b6a09e4c82985082166ebb004231c04ed8110c8b\"" Nov 1 00:54:08.629265 kubelet[2095]: I1101 00:54:08.629237 2095 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bdf1219825ffb0504da66c39f49f8fbd55336a1f4e428b159b218ea10f5222a6" Nov 1 00:54:08.631188 kubelet[2095]: I1101 00:54:08.630706 2095 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="421b6723aed3cd2f0f62334a373da8e89499ded19c5999e2874189c1e0e9a773" Nov 1 00:54:08.631399 env[1305]: time="2025-11-01T00:54:08.631362853Z" level=info msg="StopPodSandbox for \"bdf1219825ffb0504da66c39f49f8fbd55336a1f4e428b159b218ea10f5222a6\"" Nov 1 00:54:08.635663 env[1305]: time="2025-11-01T00:54:08.635625390Z" level=info msg="StopPodSandbox for \"421b6723aed3cd2f0f62334a373da8e89499ded19c5999e2874189c1e0e9a773\"" Nov 1 00:54:08.639389 kubelet[2095]: I1101 00:54:08.639364 2095 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a13d0cdc8990d99ba81d01c0c9b313b654101f1dd097a248bb33976f2d75804" Nov 1 00:54:08.641291 env[1305]: time="2025-11-01T00:54:08.641260414Z" level=info msg="StopPodSandbox for \"2a13d0cdc8990d99ba81d01c0c9b313b654101f1dd097a248bb33976f2d75804\"" Nov 1 00:54:08.722292 
env[1305]: time="2025-11-01T00:54:08.722232190Z" level=error msg="StopPodSandbox for \"bdf1219825ffb0504da66c39f49f8fbd55336a1f4e428b159b218ea10f5222a6\" failed" error="failed to destroy network for sandbox \"bdf1219825ffb0504da66c39f49f8fbd55336a1f4e428b159b218ea10f5222a6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:54:08.722795 kubelet[2095]: E1101 00:54:08.722728 2095 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bdf1219825ffb0504da66c39f49f8fbd55336a1f4e428b159b218ea10f5222a6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bdf1219825ffb0504da66c39f49f8fbd55336a1f4e428b159b218ea10f5222a6" Nov 1 00:54:08.722899 kubelet[2095]: E1101 00:54:08.722810 2095 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bdf1219825ffb0504da66c39f49f8fbd55336a1f4e428b159b218ea10f5222a6"} Nov 1 00:54:08.722899 kubelet[2095]: E1101 00:54:08.722863 2095 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"acf47117-3eb1-4aa3-89a4-bc9fecdad703\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bdf1219825ffb0504da66c39f49f8fbd55336a1f4e428b159b218ea10f5222a6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:54:08.722899 kubelet[2095]: E1101 00:54:08.722886 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"acf47117-3eb1-4aa3-89a4-bc9fecdad703\" with KillPodSandboxError: \"rpc 
error: code = Unknown desc = failed to destroy network for sandbox \\\"bdf1219825ffb0504da66c39f49f8fbd55336a1f4e428b159b218ea10f5222a6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-j9dnh" podUID="acf47117-3eb1-4aa3-89a4-bc9fecdad703" Nov 1 00:54:08.738335 env[1305]: time="2025-11-01T00:54:08.738280742Z" level=error msg="StopPodSandbox for \"421b6723aed3cd2f0f62334a373da8e89499ded19c5999e2874189c1e0e9a773\" failed" error="failed to destroy network for sandbox \"421b6723aed3cd2f0f62334a373da8e89499ded19c5999e2874189c1e0e9a773\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:54:08.738874 kubelet[2095]: E1101 00:54:08.738812 2095 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"421b6723aed3cd2f0f62334a373da8e89499ded19c5999e2874189c1e0e9a773\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="421b6723aed3cd2f0f62334a373da8e89499ded19c5999e2874189c1e0e9a773" Nov 1 00:54:08.738992 kubelet[2095]: E1101 00:54:08.738884 2095 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"421b6723aed3cd2f0f62334a373da8e89499ded19c5999e2874189c1e0e9a773"} Nov 1 00:54:08.738992 kubelet[2095]: E1101 00:54:08.738932 2095 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"749cb733-2d74-4165-b209-f5d9ea430e96\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"421b6723aed3cd2f0f62334a373da8e89499ded19c5999e2874189c1e0e9a773\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:54:08.738992 kubelet[2095]: E1101 00:54:08.738955 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"749cb733-2d74-4165-b209-f5d9ea430e96\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"421b6723aed3cd2f0f62334a373da8e89499ded19c5999e2874189c1e0e9a773\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-56478f7ccd-qkwt8" podUID="749cb733-2d74-4165-b209-f5d9ea430e96" Nov 1 00:54:08.739169 env[1305]: time="2025-11-01T00:54:08.739100545Z" level=error msg="StopPodSandbox for \"64b90828beb527fb7df55cab820b5b862fa0a86b6bce440a5f19677415b2d1b9\" failed" error="failed to destroy network for sandbox \"64b90828beb527fb7df55cab820b5b862fa0a86b6bce440a5f19677415b2d1b9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:54:08.739291 kubelet[2095]: E1101 00:54:08.739259 2095 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"64b90828beb527fb7df55cab820b5b862fa0a86b6bce440a5f19677415b2d1b9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="64b90828beb527fb7df55cab820b5b862fa0a86b6bce440a5f19677415b2d1b9" Nov 1 00:54:08.739345 kubelet[2095]: E1101 00:54:08.739311 2095 
kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"64b90828beb527fb7df55cab820b5b862fa0a86b6bce440a5f19677415b2d1b9"} Nov 1 00:54:08.739392 kubelet[2095]: E1101 00:54:08.739333 2095 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b08705e4-7a04-4c33-a8c8-a3f67298574d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"64b90828beb527fb7df55cab820b5b862fa0a86b6bce440a5f19677415b2d1b9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:54:08.739392 kubelet[2095]: E1101 00:54:08.739374 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b08705e4-7a04-4c33-a8c8-a3f67298574d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"64b90828beb527fb7df55cab820b5b862fa0a86b6bce440a5f19677415b2d1b9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-twt7m" podUID="b08705e4-7a04-4c33-a8c8-a3f67298574d" Nov 1 00:54:08.744134 env[1305]: time="2025-11-01T00:54:08.744094282Z" level=error msg="StopPodSandbox for \"ccff4abf279d481fe758cf98b6a09e4c82985082166ebb004231c04ed8110c8b\" failed" error="failed to destroy network for sandbox \"ccff4abf279d481fe758cf98b6a09e4c82985082166ebb004231c04ed8110c8b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:54:08.744475 kubelet[2095]: E1101 00:54:08.744441 2095 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc 
= failed to destroy network for sandbox \"ccff4abf279d481fe758cf98b6a09e4c82985082166ebb004231c04ed8110c8b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ccff4abf279d481fe758cf98b6a09e4c82985082166ebb004231c04ed8110c8b" Nov 1 00:54:08.744580 kubelet[2095]: E1101 00:54:08.744481 2095 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ccff4abf279d481fe758cf98b6a09e4c82985082166ebb004231c04ed8110c8b"} Nov 1 00:54:08.744580 kubelet[2095]: E1101 00:54:08.744531 2095 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0aeb6ff7-2d7d-423c-8068-1607bda1ebe8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ccff4abf279d481fe758cf98b6a09e4c82985082166ebb004231c04ed8110c8b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:54:08.744580 kubelet[2095]: E1101 00:54:08.744551 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0aeb6ff7-2d7d-423c-8068-1607bda1ebe8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ccff4abf279d481fe758cf98b6a09e4c82985082166ebb004231c04ed8110c8b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f668d4ccf-gzvhz" podUID="0aeb6ff7-2d7d-423c-8068-1607bda1ebe8" Nov 1 00:54:08.750877 env[1305]: time="2025-11-01T00:54:08.750820840Z" level=error msg="StopPodSandbox for \"2a13d0cdc8990d99ba81d01c0c9b313b654101f1dd097a248bb33976f2d75804\" failed" 
error="failed to destroy network for sandbox \"2a13d0cdc8990d99ba81d01c0c9b313b654101f1dd097a248bb33976f2d75804\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:54:08.751057 kubelet[2095]: E1101 00:54:08.751022 2095 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2a13d0cdc8990d99ba81d01c0c9b313b654101f1dd097a248bb33976f2d75804\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2a13d0cdc8990d99ba81d01c0c9b313b654101f1dd097a248bb33976f2d75804" Nov 1 00:54:08.751109 kubelet[2095]: E1101 00:54:08.751057 2095 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2a13d0cdc8990d99ba81d01c0c9b313b654101f1dd097a248bb33976f2d75804"} Nov 1 00:54:08.751109 kubelet[2095]: E1101 00:54:08.751097 2095 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"447c37d4-c1de-4035-a57b-b729047ea7fb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2a13d0cdc8990d99ba81d01c0c9b313b654101f1dd097a248bb33976f2d75804\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:54:08.751209 kubelet[2095]: E1101 00:54:08.751116 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"447c37d4-c1de-4035-a57b-b729047ea7fb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2a13d0cdc8990d99ba81d01c0c9b313b654101f1dd097a248bb33976f2d75804\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f668d4ccf-fmsxj" podUID="447c37d4-c1de-4035-a57b-b729047ea7fb" Nov 1 00:54:12.830670 kubelet[2095]: I1101 00:54:12.830567 2095 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 00:54:12.838962 kubelet[2095]: E1101 00:54:12.838919 2095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:54:12.951000 audit[3213]: NETFILTER_CFG table=filter:103 family=2 entries=21 op=nft_register_rule pid=3213 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:54:12.974148 kernel: kauditd_printk_skb: 8 callbacks suppressed Nov 1 00:54:12.978682 kernel: audit: type=1325 audit(1761958452.951:319): table=filter:103 family=2 entries=21 op=nft_register_rule pid=3213 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:54:12.978766 kernel: audit: type=1300 audit(1761958452.951:319): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc5e5fe650 a2=0 a3=7ffc5e5fe63c items=0 ppid=2198 pid=3213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:12.978798 kernel: audit: type=1327 audit(1761958452.951:319): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:54:12.978821 kernel: audit: type=1325 audit(1761958452.968:320): table=nat:104 family=2 entries=19 op=nft_register_chain pid=3213 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:54:12.978845 kernel: audit: type=1300 audit(1761958452.968:320): arch=c000003e syscall=46 
success=yes exit=6276 a0=3 a1=7ffc5e5fe650 a2=0 a3=7ffc5e5fe63c items=0 ppid=2198 pid=3213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:12.951000 audit[3213]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc5e5fe650 a2=0 a3=7ffc5e5fe63c items=0 ppid=2198 pid=3213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:12.992239 kernel: audit: type=1327 audit(1761958452.968:320): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:54:12.951000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:54:12.968000 audit[3213]: NETFILTER_CFG table=nat:104 family=2 entries=19 op=nft_register_chain pid=3213 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:54:12.968000 audit[3213]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffc5e5fe650 a2=0 a3=7ffc5e5fe63c items=0 ppid=2198 pid=3213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:12.968000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:54:13.651780 kubelet[2095]: E1101 00:54:13.651668 2095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:54:16.188764 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount952577452.mount: Deactivated successfully. Nov 1 00:54:16.215918 env[1305]: time="2025-11-01T00:54:16.215856204Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:54:16.217816 env[1305]: time="2025-11-01T00:54:16.217783548Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:54:16.219509 env[1305]: time="2025-11-01T00:54:16.219483302Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:54:16.221033 env[1305]: time="2025-11-01T00:54:16.221006536Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:54:16.221600 env[1305]: time="2025-11-01T00:54:16.221574463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 1 00:54:16.257037 env[1305]: time="2025-11-01T00:54:16.256985668Z" level=info msg="CreateContainer within sandbox \"53147f3d6f6e5b83b95ee12f1d97c8eb679b9856974c13cf626efc26b79b2580\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 1 00:54:16.272461 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1822356377.mount: Deactivated successfully. 
Nov 1 00:54:16.276985 env[1305]: time="2025-11-01T00:54:16.276923481Z" level=info msg="CreateContainer within sandbox \"53147f3d6f6e5b83b95ee12f1d97c8eb679b9856974c13cf626efc26b79b2580\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"eabb405608f3c39fa999d4d9f5c729f9b8fd113580e722669da852859a2bed21\"" Nov 1 00:54:16.279122 env[1305]: time="2025-11-01T00:54:16.278949589Z" level=info msg="StartContainer for \"eabb405608f3c39fa999d4d9f5c729f9b8fd113580e722669da852859a2bed21\"" Nov 1 00:54:16.347214 env[1305]: time="2025-11-01T00:54:16.347176033Z" level=info msg="StartContainer for \"eabb405608f3c39fa999d4d9f5c729f9b8fd113580e722669da852859a2bed21\" returns successfully" Nov 1 00:54:16.533816 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 1 00:54:16.533994 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 1 00:54:16.699825 kubelet[2095]: E1101 00:54:16.699741 2095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:54:16.780548 kubelet[2095]: I1101 00:54:16.777176 2095 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-zfgvj" podStartSLOduration=1.896708796 podStartE2EDuration="19.774079987s" podCreationTimestamp="2025-11-01 00:53:57 +0000 UTC" firstStartedPulling="2025-11-01 00:53:58.34559546 +0000 UTC m=+22.103691056" lastFinishedPulling="2025-11-01 00:54:16.222966665 +0000 UTC m=+39.981062247" observedRunningTime="2025-11-01 00:54:16.751827931 +0000 UTC m=+40.509923559" watchObservedRunningTime="2025-11-01 00:54:16.774079987 +0000 UTC m=+40.532175581" Nov 1 00:54:16.784361 env[1305]: time="2025-11-01T00:54:16.783581147Z" level=info msg="StopPodSandbox for \"421b6723aed3cd2f0f62334a373da8e89499ded19c5999e2874189c1e0e9a773\"" Nov 1 00:54:17.034054 env[1305]: 2025-11-01 
00:54:16.899 [INFO][3278] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="421b6723aed3cd2f0f62334a373da8e89499ded19c5999e2874189c1e0e9a773" Nov 1 00:54:17.034054 env[1305]: 2025-11-01 00:54:16.900 [INFO][3278] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="421b6723aed3cd2f0f62334a373da8e89499ded19c5999e2874189c1e0e9a773" iface="eth0" netns="/var/run/netns/cni-91083934-abe2-acaa-8715-904f7cafeac4" Nov 1 00:54:17.034054 env[1305]: 2025-11-01 00:54:16.902 [INFO][3278] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="421b6723aed3cd2f0f62334a373da8e89499ded19c5999e2874189c1e0e9a773" iface="eth0" netns="/var/run/netns/cni-91083934-abe2-acaa-8715-904f7cafeac4" Nov 1 00:54:17.034054 env[1305]: 2025-11-01 00:54:16.903 [INFO][3278] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="421b6723aed3cd2f0f62334a373da8e89499ded19c5999e2874189c1e0e9a773" iface="eth0" netns="/var/run/netns/cni-91083934-abe2-acaa-8715-904f7cafeac4" Nov 1 00:54:17.034054 env[1305]: 2025-11-01 00:54:16.903 [INFO][3278] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="421b6723aed3cd2f0f62334a373da8e89499ded19c5999e2874189c1e0e9a773" Nov 1 00:54:17.034054 env[1305]: 2025-11-01 00:54:16.903 [INFO][3278] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="421b6723aed3cd2f0f62334a373da8e89499ded19c5999e2874189c1e0e9a773" Nov 1 00:54:17.034054 env[1305]: 2025-11-01 00:54:17.014 [INFO][3285] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="421b6723aed3cd2f0f62334a373da8e89499ded19c5999e2874189c1e0e9a773" HandleID="k8s-pod-network.421b6723aed3cd2f0f62334a373da8e89499ded19c5999e2874189c1e0e9a773" Workload="ci--3510.3.8--n--0efaf8214b-k8s-whisker--56478f7ccd--qkwt8-eth0" Nov 1 00:54:17.034054 env[1305]: 2025-11-01 00:54:17.016 [INFO][3285] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 1 00:54:17.034054 env[1305]: 2025-11-01 00:54:17.016 [INFO][3285] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:54:17.034054 env[1305]: 2025-11-01 00:54:17.028 [WARNING][3285] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="421b6723aed3cd2f0f62334a373da8e89499ded19c5999e2874189c1e0e9a773" HandleID="k8s-pod-network.421b6723aed3cd2f0f62334a373da8e89499ded19c5999e2874189c1e0e9a773" Workload="ci--3510.3.8--n--0efaf8214b-k8s-whisker--56478f7ccd--qkwt8-eth0" Nov 1 00:54:17.034054 env[1305]: 2025-11-01 00:54:17.028 [INFO][3285] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="421b6723aed3cd2f0f62334a373da8e89499ded19c5999e2874189c1e0e9a773" HandleID="k8s-pod-network.421b6723aed3cd2f0f62334a373da8e89499ded19c5999e2874189c1e0e9a773" Workload="ci--3510.3.8--n--0efaf8214b-k8s-whisker--56478f7ccd--qkwt8-eth0" Nov 1 00:54:17.034054 env[1305]: 2025-11-01 00:54:17.029 [INFO][3285] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:54:17.034054 env[1305]: 2025-11-01 00:54:17.031 [INFO][3278] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="421b6723aed3cd2f0f62334a373da8e89499ded19c5999e2874189c1e0e9a773" Nov 1 00:54:17.034054 env[1305]: time="2025-11-01T00:54:17.033708932Z" level=info msg="TearDown network for sandbox \"421b6723aed3cd2f0f62334a373da8e89499ded19c5999e2874189c1e0e9a773\" successfully" Nov 1 00:54:17.034054 env[1305]: time="2025-11-01T00:54:17.033772138Z" level=info msg="StopPodSandbox for \"421b6723aed3cd2f0f62334a373da8e89499ded19c5999e2874189c1e0e9a773\" returns successfully" Nov 1 00:54:17.148437 kubelet[2095]: I1101 00:54:17.148403 2095 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/749cb733-2d74-4165-b209-f5d9ea430e96-whisker-backend-key-pair\") pod \"749cb733-2d74-4165-b209-f5d9ea430e96\" (UID: \"749cb733-2d74-4165-b209-f5d9ea430e96\") " Nov 1 00:54:17.148695 kubelet[2095]: I1101 00:54:17.148668 2095 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/749cb733-2d74-4165-b209-f5d9ea430e96-whisker-ca-bundle\") pod \"749cb733-2d74-4165-b209-f5d9ea430e96\" (UID: \"749cb733-2d74-4165-b209-f5d9ea430e96\") " Nov 1 00:54:17.148896 kubelet[2095]: I1101 00:54:17.148843 2095 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qz8qg\" (UniqueName: \"kubernetes.io/projected/749cb733-2d74-4165-b209-f5d9ea430e96-kube-api-access-qz8qg\") pod \"749cb733-2d74-4165-b209-f5d9ea430e96\" (UID: \"749cb733-2d74-4165-b209-f5d9ea430e96\") " Nov 1 00:54:17.155293 kubelet[2095]: I1101 00:54:17.155238 2095 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/749cb733-2d74-4165-b209-f5d9ea430e96-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "749cb733-2d74-4165-b209-f5d9ea430e96" (UID: "749cb733-2d74-4165-b209-f5d9ea430e96"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 00:54:17.156982 kubelet[2095]: I1101 00:54:17.156946 2095 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/749cb733-2d74-4165-b209-f5d9ea430e96-kube-api-access-qz8qg" (OuterVolumeSpecName: "kube-api-access-qz8qg") pod "749cb733-2d74-4165-b209-f5d9ea430e96" (UID: "749cb733-2d74-4165-b209-f5d9ea430e96"). InnerVolumeSpecName "kube-api-access-qz8qg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 00:54:17.159602 kubelet[2095]: I1101 00:54:17.159571 2095 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/749cb733-2d74-4165-b209-f5d9ea430e96-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "749cb733-2d74-4165-b209-f5d9ea430e96" (UID: "749cb733-2d74-4165-b209-f5d9ea430e96"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 00:54:17.189613 systemd[1]: run-netns-cni\x2d91083934\x2dabe2\x2dacaa\x2d8715\x2d904f7cafeac4.mount: Deactivated successfully. Nov 1 00:54:17.190079 systemd[1]: var-lib-kubelet-pods-749cb733\x2d2d74\x2d4165\x2db209\x2df5d9ea430e96-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqz8qg.mount: Deactivated successfully. Nov 1 00:54:17.190496 systemd[1]: var-lib-kubelet-pods-749cb733\x2d2d74\x2d4165\x2db209\x2df5d9ea430e96-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Nov 1 00:54:17.254102 kubelet[2095]: I1101 00:54:17.254041 2095 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/749cb733-2d74-4165-b209-f5d9ea430e96-whisker-ca-bundle\") on node \"ci-3510.3.8-n-0efaf8214b\" DevicePath \"\"" Nov 1 00:54:17.254331 kubelet[2095]: I1101 00:54:17.254314 2095 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qz8qg\" (UniqueName: \"kubernetes.io/projected/749cb733-2d74-4165-b209-f5d9ea430e96-kube-api-access-qz8qg\") on node \"ci-3510.3.8-n-0efaf8214b\" DevicePath \"\"" Nov 1 00:54:17.254433 kubelet[2095]: I1101 00:54:17.254419 2095 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/749cb733-2d74-4165-b209-f5d9ea430e96-whisker-backend-key-pair\") on node \"ci-3510.3.8-n-0efaf8214b\" DevicePath \"\"" Nov 1 00:54:17.706318 kubelet[2095]: I1101 00:54:17.706278 2095 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 00:54:17.707263 kubelet[2095]: E1101 00:54:17.707238 2095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:54:17.959012 kubelet[2095]: I1101 00:54:17.958868 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5bfl\" (UniqueName: \"kubernetes.io/projected/0234b74a-300a-4772-b752-16560b6b9a9c-kube-api-access-z5bfl\") pod \"whisker-6dfb57dc84-knf65\" (UID: \"0234b74a-300a-4772-b752-16560b6b9a9c\") " pod="calico-system/whisker-6dfb57dc84-knf65" Nov 1 00:54:17.959265 kubelet[2095]: I1101 00:54:17.959243 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0234b74a-300a-4772-b752-16560b6b9a9c-whisker-ca-bundle\") pod 
\"whisker-6dfb57dc84-knf65\" (UID: \"0234b74a-300a-4772-b752-16560b6b9a9c\") " pod="calico-system/whisker-6dfb57dc84-knf65" Nov 1 00:54:17.959383 kubelet[2095]: I1101 00:54:17.959366 2095 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0234b74a-300a-4772-b752-16560b6b9a9c-whisker-backend-key-pair\") pod \"whisker-6dfb57dc84-knf65\" (UID: \"0234b74a-300a-4772-b752-16560b6b9a9c\") " pod="calico-system/whisker-6dfb57dc84-knf65" Nov 1 00:54:18.086235 env[1305]: time="2025-11-01T00:54:18.086156215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6dfb57dc84-knf65,Uid:0234b74a-300a-4772-b752-16560b6b9a9c,Namespace:calico-system,Attempt:0,}" Nov 1 00:54:18.228000 audit[3348]: AVC avc: denied { write } for pid=3348 comm="tee" name="fd" dev="proc" ino=24762 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 00:54:18.244394 kernel: audit: type=1400 audit(1761958458.228:321): avc: denied { write } for pid=3348 comm="tee" name="fd" dev="proc" ino=24762 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 00:54:18.244494 kernel: audit: type=1300 audit(1761958458.228:321): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffecb5987b8 a2=241 a3=1b6 items=1 ppid=3316 pid=3348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:18.228000 audit[3348]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffecb5987b8 a2=241 a3=1b6 items=1 ppid=3316 pid=3348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:18.228000 audit: CWD 
cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Nov 1 00:54:18.259781 kernel: audit: type=1307 audit(1761958458.228:321): cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Nov 1 00:54:18.228000 audit: PATH item=0 name="/dev/fd/63" inode=24746 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:54:18.290279 kernel: audit: type=1302 audit(1761958458.228:321): item=0 name="/dev/fd/63" inode=24746 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:54:18.228000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 00:54:18.296797 kernel: audit: type=1327 audit(1761958458.228:321): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 00:54:18.229000 audit[3350]: AVC avc: denied { write } for pid=3350 comm="tee" name="fd" dev="proc" ino=24766 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 00:54:18.306800 kernel: audit: type=1400 audit(1761958458.229:322): avc: denied { write } for pid=3350 comm="tee" name="fd" dev="proc" ino=24766 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 00:54:18.229000 audit[3350]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe954777c9 a2=241 a3=1b6 items=1 ppid=3318 pid=3350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:18.330792 kernel: audit: type=1300 audit(1761958458.229:322): arch=c000003e syscall=257 success=yes 
exit=3 a0=ffffff9c a1=7ffe954777c9 a2=241 a3=1b6 items=1 ppid=3318 pid=3350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:18.229000 audit: CWD cwd="/etc/service/enabled/bird/log" Nov 1 00:54:18.229000 audit: PATH item=0 name="/dev/fd/63" inode=24747 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:54:18.344807 kernel: audit: type=1307 audit(1761958458.229:322): cwd="/etc/service/enabled/bird/log" Nov 1 00:54:18.344863 kernel: audit: type=1302 audit(1761958458.229:322): item=0 name="/dev/fd/63" inode=24747 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:54:18.229000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 00:54:18.351888 kernel: audit: type=1327 audit(1761958458.229:322): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 00:54:18.312000 audit[3372]: AVC avc: denied { write } for pid=3372 comm="tee" name="fd" dev="proc" ino=25622 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 00:54:18.312000 audit[3372]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc8114b7ca a2=241 a3=1b6 items=1 ppid=3326 pid=3372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:18.312000 audit: CWD cwd="/etc/service/enabled/cni/log" Nov 1 00:54:18.312000 audit: PATH item=0 
name="/dev/fd/63" inode=24775 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:54:18.312000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 00:54:18.346000 audit[3389]: AVC avc: denied { write } for pid=3389 comm="tee" name="fd" dev="proc" ino=24818 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 00:54:18.346000 audit[3389]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe617187c8 a2=241 a3=1b6 items=1 ppid=3334 pid=3389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:18.346000 audit: CWD cwd="/etc/service/enabled/felix/log" Nov 1 00:54:18.346000 audit: PATH item=0 name="/dev/fd/63" inode=24805 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:54:18.346000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 00:54:18.357000 audit[3394]: AVC avc: denied { write } for pid=3394 comm="tee" name="fd" dev="proc" ino=24824 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 00:54:18.357000 audit[3394]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffedee3c7c8 a2=241 a3=1b6 items=1 ppid=3329 pid=3394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:18.357000 audit: CWD 
cwd="/etc/service/enabled/bird6/log" Nov 1 00:54:18.357000 audit: PATH item=0 name="/dev/fd/63" inode=25617 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:54:18.357000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 00:54:18.357000 audit[3392]: AVC avc: denied { write } for pid=3392 comm="tee" name="fd" dev="proc" ino=24826 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 00:54:18.357000 audit[3392]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe5009f7b9 a2=241 a3=1b6 items=1 ppid=3320 pid=3392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:18.357000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Nov 1 00:54:18.357000 audit: PATH item=0 name="/dev/fd/63" inode=25616 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:54:18.357000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 00:54:18.404000 audit[3402]: AVC avc: denied { write } for pid=3402 comm="tee" name="fd" dev="proc" ino=25631 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 00:54:18.404000 audit[3402]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc0567e7c8 a2=241 a3=1b6 items=1 ppid=3324 pid=3402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" 
exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:18.404000 audit: CWD cwd="/etc/service/enabled/confd/log" Nov 1 00:54:18.404000 audit: PATH item=0 name="/dev/fd/63" inode=25626 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:54:18.404000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 00:54:18.468213 env[1305]: time="2025-11-01T00:54:18.468163701Z" level=info msg="StopPodSandbox for \"d44b499bf360f5f24ec2911ec851a0ee95d2d5f2107e00217cff1abcc2259caf\"" Nov 1 00:54:18.471176 kubelet[2095]: I1101 00:54:18.471124 2095 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="749cb733-2d74-4165-b209-f5d9ea430e96" path="/var/lib/kubelet/pods/749cb733-2d74-4165-b209-f5d9ea430e96/volumes" Nov 1 00:54:18.543996 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Nov 1 00:54:18.544120 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali1d392c1f365: link becomes ready Nov 1 00:54:18.545186 systemd-networkd[1058]: cali1d392c1f365: Link UP Nov 1 00:54:18.545403 systemd-networkd[1058]: cali1d392c1f365: Gained carrier Nov 1 00:54:18.576850 env[1305]: 2025-11-01 00:54:18.328 [INFO][3378] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:54:18.576850 env[1305]: 2025-11-01 00:54:18.356 [INFO][3378] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--0efaf8214b-k8s-whisker--6dfb57dc84--knf65-eth0 whisker-6dfb57dc84- calico-system 0234b74a-300a-4772-b752-16560b6b9a9c 912 0 2025-11-01 00:54:17 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6dfb57dc84 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] 
[]} {k8s ci-3510.3.8-n-0efaf8214b whisker-6dfb57dc84-knf65 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali1d392c1f365 [] [] }} ContainerID="4130aa03f88dcc3070ea4c7809d18c5e05bc21e8aa851c842a9326ebe0a60164" Namespace="calico-system" Pod="whisker-6dfb57dc84-knf65" WorkloadEndpoint="ci--3510.3.8--n--0efaf8214b-k8s-whisker--6dfb57dc84--knf65-" Nov 1 00:54:18.576850 env[1305]: 2025-11-01 00:54:18.356 [INFO][3378] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4130aa03f88dcc3070ea4c7809d18c5e05bc21e8aa851c842a9326ebe0a60164" Namespace="calico-system" Pod="whisker-6dfb57dc84-knf65" WorkloadEndpoint="ci--3510.3.8--n--0efaf8214b-k8s-whisker--6dfb57dc84--knf65-eth0" Nov 1 00:54:18.576850 env[1305]: 2025-11-01 00:54:18.420 [INFO][3404] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4130aa03f88dcc3070ea4c7809d18c5e05bc21e8aa851c842a9326ebe0a60164" HandleID="k8s-pod-network.4130aa03f88dcc3070ea4c7809d18c5e05bc21e8aa851c842a9326ebe0a60164" Workload="ci--3510.3.8--n--0efaf8214b-k8s-whisker--6dfb57dc84--knf65-eth0" Nov 1 00:54:18.576850 env[1305]: 2025-11-01 00:54:18.420 [INFO][3404] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4130aa03f88dcc3070ea4c7809d18c5e05bc21e8aa851c842a9326ebe0a60164" HandleID="k8s-pod-network.4130aa03f88dcc3070ea4c7809d18c5e05bc21e8aa851c842a9326ebe0a60164" Workload="ci--3510.3.8--n--0efaf8214b-k8s-whisker--6dfb57dc84--knf65-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000321860), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.8-n-0efaf8214b", "pod":"whisker-6dfb57dc84-knf65", "timestamp":"2025-11-01 00:54:18.42017377 +0000 UTC"}, Hostname:"ci-3510.3.8-n-0efaf8214b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:54:18.576850 env[1305]: 2025-11-01 
00:54:18.420 [INFO][3404] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:54:18.576850 env[1305]: 2025-11-01 00:54:18.420 [INFO][3404] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:54:18.576850 env[1305]: 2025-11-01 00:54:18.420 [INFO][3404] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-0efaf8214b' Nov 1 00:54:18.576850 env[1305]: 2025-11-01 00:54:18.435 [INFO][3404] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4130aa03f88dcc3070ea4c7809d18c5e05bc21e8aa851c842a9326ebe0a60164" host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:18.576850 env[1305]: 2025-11-01 00:54:18.446 [INFO][3404] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:18.576850 env[1305]: 2025-11-01 00:54:18.453 [INFO][3404] ipam/ipam.go 511: Trying affinity for 192.168.55.128/26 host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:18.576850 env[1305]: 2025-11-01 00:54:18.456 [INFO][3404] ipam/ipam.go 158: Attempting to load block cidr=192.168.55.128/26 host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:18.576850 env[1305]: 2025-11-01 00:54:18.458 [INFO][3404] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.55.128/26 host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:18.576850 env[1305]: 2025-11-01 00:54:18.458 [INFO][3404] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.55.128/26 handle="k8s-pod-network.4130aa03f88dcc3070ea4c7809d18c5e05bc21e8aa851c842a9326ebe0a60164" host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:18.576850 env[1305]: 2025-11-01 00:54:18.460 [INFO][3404] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4130aa03f88dcc3070ea4c7809d18c5e05bc21e8aa851c842a9326ebe0a60164 Nov 1 00:54:18.576850 env[1305]: 2025-11-01 00:54:18.464 [INFO][3404] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.55.128/26 
handle="k8s-pod-network.4130aa03f88dcc3070ea4c7809d18c5e05bc21e8aa851c842a9326ebe0a60164" host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:18.576850 env[1305]: 2025-11-01 00:54:18.474 [INFO][3404] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.55.129/26] block=192.168.55.128/26 handle="k8s-pod-network.4130aa03f88dcc3070ea4c7809d18c5e05bc21e8aa851c842a9326ebe0a60164" host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:18.576850 env[1305]: 2025-11-01 00:54:18.474 [INFO][3404] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.55.129/26] handle="k8s-pod-network.4130aa03f88dcc3070ea4c7809d18c5e05bc21e8aa851c842a9326ebe0a60164" host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:18.576850 env[1305]: 2025-11-01 00:54:18.474 [INFO][3404] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:54:18.576850 env[1305]: 2025-11-01 00:54:18.474 [INFO][3404] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.55.129/26] IPv6=[] ContainerID="4130aa03f88dcc3070ea4c7809d18c5e05bc21e8aa851c842a9326ebe0a60164" HandleID="k8s-pod-network.4130aa03f88dcc3070ea4c7809d18c5e05bc21e8aa851c842a9326ebe0a60164" Workload="ci--3510.3.8--n--0efaf8214b-k8s-whisker--6dfb57dc84--knf65-eth0" Nov 1 00:54:18.577914 env[1305]: 2025-11-01 00:54:18.489 [INFO][3378] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4130aa03f88dcc3070ea4c7809d18c5e05bc21e8aa851c842a9326ebe0a60164" Namespace="calico-system" Pod="whisker-6dfb57dc84-knf65" WorkloadEndpoint="ci--3510.3.8--n--0efaf8214b-k8s-whisker--6dfb57dc84--knf65-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0efaf8214b-k8s-whisker--6dfb57dc84--knf65-eth0", GenerateName:"whisker-6dfb57dc84-", Namespace:"calico-system", SelfLink:"", UID:"0234b74a-300a-4772-b752-16560b6b9a9c", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 54, 17, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6dfb57dc84", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0efaf8214b", ContainerID:"", Pod:"whisker-6dfb57dc84-knf65", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.55.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1d392c1f365", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:54:18.577914 env[1305]: 2025-11-01 00:54:18.489 [INFO][3378] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.55.129/32] ContainerID="4130aa03f88dcc3070ea4c7809d18c5e05bc21e8aa851c842a9326ebe0a60164" Namespace="calico-system" Pod="whisker-6dfb57dc84-knf65" WorkloadEndpoint="ci--3510.3.8--n--0efaf8214b-k8s-whisker--6dfb57dc84--knf65-eth0" Nov 1 00:54:18.577914 env[1305]: 2025-11-01 00:54:18.489 [INFO][3378] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1d392c1f365 ContainerID="4130aa03f88dcc3070ea4c7809d18c5e05bc21e8aa851c842a9326ebe0a60164" Namespace="calico-system" Pod="whisker-6dfb57dc84-knf65" WorkloadEndpoint="ci--3510.3.8--n--0efaf8214b-k8s-whisker--6dfb57dc84--knf65-eth0" Nov 1 00:54:18.577914 env[1305]: 2025-11-01 00:54:18.543 [INFO][3378] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4130aa03f88dcc3070ea4c7809d18c5e05bc21e8aa851c842a9326ebe0a60164" Namespace="calico-system" Pod="whisker-6dfb57dc84-knf65" 
WorkloadEndpoint="ci--3510.3.8--n--0efaf8214b-k8s-whisker--6dfb57dc84--knf65-eth0" Nov 1 00:54:18.577914 env[1305]: 2025-11-01 00:54:18.554 [INFO][3378] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4130aa03f88dcc3070ea4c7809d18c5e05bc21e8aa851c842a9326ebe0a60164" Namespace="calico-system" Pod="whisker-6dfb57dc84-knf65" WorkloadEndpoint="ci--3510.3.8--n--0efaf8214b-k8s-whisker--6dfb57dc84--knf65-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0efaf8214b-k8s-whisker--6dfb57dc84--knf65-eth0", GenerateName:"whisker-6dfb57dc84-", Namespace:"calico-system", SelfLink:"", UID:"0234b74a-300a-4772-b752-16560b6b9a9c", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 54, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6dfb57dc84", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0efaf8214b", ContainerID:"4130aa03f88dcc3070ea4c7809d18c5e05bc21e8aa851c842a9326ebe0a60164", Pod:"whisker-6dfb57dc84-knf65", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.55.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1d392c1f365", MAC:"9e:82:8a:6a:4b:e3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:54:18.577914 env[1305]: 
2025-11-01 00:54:18.570 [INFO][3378] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4130aa03f88dcc3070ea4c7809d18c5e05bc21e8aa851c842a9326ebe0a60164" Namespace="calico-system" Pod="whisker-6dfb57dc84-knf65" WorkloadEndpoint="ci--3510.3.8--n--0efaf8214b-k8s-whisker--6dfb57dc84--knf65-eth0" Nov 1 00:54:18.626156 env[1305]: time="2025-11-01T00:54:18.626034817Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:54:18.626838 env[1305]: time="2025-11-01T00:54:18.626450646Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:54:18.627366 env[1305]: time="2025-11-01T00:54:18.626997003Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:54:18.629251 env[1305]: time="2025-11-01T00:54:18.629119436Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4130aa03f88dcc3070ea4c7809d18c5e05bc21e8aa851c842a9326ebe0a60164 pid=3442 runtime=io.containerd.runc.v2 Nov 1 00:54:18.780800 env[1305]: 2025-11-01 00:54:18.623 [INFO][3422] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d44b499bf360f5f24ec2911ec851a0ee95d2d5f2107e00217cff1abcc2259caf" Nov 1 00:54:18.780800 env[1305]: 2025-11-01 00:54:18.623 [INFO][3422] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d44b499bf360f5f24ec2911ec851a0ee95d2d5f2107e00217cff1abcc2259caf" iface="eth0" netns="/var/run/netns/cni-0365193d-39c6-a703-f677-72cdc71ee49f" Nov 1 00:54:18.780800 env[1305]: 2025-11-01 00:54:18.623 [INFO][3422] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="d44b499bf360f5f24ec2911ec851a0ee95d2d5f2107e00217cff1abcc2259caf" iface="eth0" netns="/var/run/netns/cni-0365193d-39c6-a703-f677-72cdc71ee49f" Nov 1 00:54:18.780800 env[1305]: 2025-11-01 00:54:18.624 [INFO][3422] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d44b499bf360f5f24ec2911ec851a0ee95d2d5f2107e00217cff1abcc2259caf" iface="eth0" netns="/var/run/netns/cni-0365193d-39c6-a703-f677-72cdc71ee49f" Nov 1 00:54:18.780800 env[1305]: 2025-11-01 00:54:18.624 [INFO][3422] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d44b499bf360f5f24ec2911ec851a0ee95d2d5f2107e00217cff1abcc2259caf" Nov 1 00:54:18.780800 env[1305]: 2025-11-01 00:54:18.624 [INFO][3422] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d44b499bf360f5f24ec2911ec851a0ee95d2d5f2107e00217cff1abcc2259caf" Nov 1 00:54:18.780800 env[1305]: 2025-11-01 00:54:18.759 [INFO][3454] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d44b499bf360f5f24ec2911ec851a0ee95d2d5f2107e00217cff1abcc2259caf" HandleID="k8s-pod-network.d44b499bf360f5f24ec2911ec851a0ee95d2d5f2107e00217cff1abcc2259caf" Workload="ci--3510.3.8--n--0efaf8214b-k8s-coredns--668d6bf9bc--95hcw-eth0" Nov 1 00:54:18.780800 env[1305]: 2025-11-01 00:54:18.760 [INFO][3454] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:54:18.780800 env[1305]: 2025-11-01 00:54:18.760 [INFO][3454] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:54:18.780800 env[1305]: 2025-11-01 00:54:18.770 [WARNING][3454] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d44b499bf360f5f24ec2911ec851a0ee95d2d5f2107e00217cff1abcc2259caf" HandleID="k8s-pod-network.d44b499bf360f5f24ec2911ec851a0ee95d2d5f2107e00217cff1abcc2259caf" Workload="ci--3510.3.8--n--0efaf8214b-k8s-coredns--668d6bf9bc--95hcw-eth0" Nov 1 00:54:18.780800 env[1305]: 2025-11-01 00:54:18.770 [INFO][3454] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d44b499bf360f5f24ec2911ec851a0ee95d2d5f2107e00217cff1abcc2259caf" HandleID="k8s-pod-network.d44b499bf360f5f24ec2911ec851a0ee95d2d5f2107e00217cff1abcc2259caf" Workload="ci--3510.3.8--n--0efaf8214b-k8s-coredns--668d6bf9bc--95hcw-eth0" Nov 1 00:54:18.780800 env[1305]: 2025-11-01 00:54:18.775 [INFO][3454] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:54:18.780800 env[1305]: 2025-11-01 00:54:18.777 [INFO][3422] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d44b499bf360f5f24ec2911ec851a0ee95d2d5f2107e00217cff1abcc2259caf" Nov 1 00:54:18.784234 systemd[1]: run-netns-cni\x2d0365193d\x2d39c6\x2da703\x2df677\x2d72cdc71ee49f.mount: Deactivated successfully. 
Nov 1 00:54:18.789129 env[1305]: time="2025-11-01T00:54:18.789093743Z" level=info msg="TearDown network for sandbox \"d44b499bf360f5f24ec2911ec851a0ee95d2d5f2107e00217cff1abcc2259caf\" successfully" Nov 1 00:54:18.789239 env[1305]: time="2025-11-01T00:54:18.789220543Z" level=info msg="StopPodSandbox for \"d44b499bf360f5f24ec2911ec851a0ee95d2d5f2107e00217cff1abcc2259caf\" returns successfully" Nov 1 00:54:18.789830 kubelet[2095]: E1101 00:54:18.789677 2095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:54:18.790429 env[1305]: time="2025-11-01T00:54:18.790401350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-95hcw,Uid:04f4ba43-b773-4444-b355-28563af8171b,Namespace:kube-system,Attempt:1,}" Nov 1 00:54:18.853909 env[1305]: time="2025-11-01T00:54:18.845308478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6dfb57dc84-knf65,Uid:0234b74a-300a-4772-b752-16560b6b9a9c,Namespace:calico-system,Attempt:0,} returns sandbox id \"4130aa03f88dcc3070ea4c7809d18c5e05bc21e8aa851c842a9326ebe0a60164\"" Nov 1 00:54:18.862342 env[1305]: time="2025-11-01T00:54:18.862306453Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:54:18.890000 audit[3525]: AVC avc: denied { bpf } for pid=3525 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:18.890000 audit[3525]: AVC avc: denied { bpf } for pid=3525 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:18.890000 audit[3525]: AVC avc: denied { perfmon } for pid=3525 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:18.890000 
audit[3525]: AVC avc: denied { perfmon } for pid=3525 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:18.890000 audit[3525]: AVC avc: denied { perfmon } for pid=3525 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:18.890000 audit[3525]: AVC avc: denied { perfmon } for pid=3525 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:18.890000 audit[3525]: AVC avc: denied { perfmon } for pid=3525 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:18.890000 audit[3525]: AVC avc: denied { bpf } for pid=3525 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:18.890000 audit[3525]: AVC avc: denied { bpf } for pid=3525 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:18.890000 audit: BPF prog-id=10 op=LOAD Nov 1 00:54:18.890000 audit[3525]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffdafb39f0 a2=98 a3=1fffffffffffffff items=0 ppid=3335 pid=3525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:18.890000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Nov 1 00:54:18.891000 audit: BPF 
prog-id=10 op=UNLOAD Nov 1 00:54:18.891000 audit[3525]: AVC avc: denied { bpf } for pid=3525 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:18.891000 audit[3525]: AVC avc: denied { bpf } for pid=3525 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:18.891000 audit[3525]: AVC avc: denied { perfmon } for pid=3525 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:18.891000 audit[3525]: AVC avc: denied { perfmon } for pid=3525 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:18.891000 audit[3525]: AVC avc: denied { perfmon } for pid=3525 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:18.891000 audit[3525]: AVC avc: denied { perfmon } for pid=3525 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:18.891000 audit[3525]: AVC avc: denied { perfmon } for pid=3525 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:18.891000 audit[3525]: AVC avc: denied { bpf } for pid=3525 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:18.891000 audit[3525]: AVC avc: denied { bpf } for pid=3525 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:18.891000 audit: BPF prog-id=11 op=LOAD Nov 1 
00:54:18.891000 audit[3525]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffdafb38d0 a2=94 a3=3 items=0 ppid=3335 pid=3525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:18.891000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Nov 1 00:54:18.891000 audit: BPF prog-id=11 op=UNLOAD Nov 1 00:54:18.891000 audit[3525]: AVC avc: denied { bpf } for pid=3525 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:18.891000 audit[3525]: AVC avc: denied { bpf } for pid=3525 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:18.891000 audit[3525]: AVC avc: denied { perfmon } for pid=3525 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:18.891000 audit[3525]: AVC avc: denied { perfmon } for pid=3525 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:18.891000 audit[3525]: AVC avc: denied { perfmon } for pid=3525 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:18.891000 audit[3525]: AVC avc: denied { perfmon } for pid=3525 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:18.891000 audit[3525]: AVC 
avc: denied { perfmon } for pid=3525 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:18.891000 audit[3525]: AVC avc: denied { bpf } for pid=3525 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:18.891000 audit[3525]: AVC avc: denied { bpf } for pid=3525 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:18.891000 audit: BPF prog-id=12 op=LOAD Nov 1 00:54:18.891000 audit[3525]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffdafb3910 a2=94 a3=7fffdafb3af0 items=0 ppid=3335 pid=3525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:18.891000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Nov 1 00:54:18.891000 audit: BPF prog-id=12 op=UNLOAD Nov 1 00:54:18.891000 audit[3525]: AVC avc: denied { perfmon } for pid=3525 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:18.891000 audit[3525]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=0 a1=7fffdafb39e0 a2=50 a3=a000000085 items=0 ppid=3335 pid=3525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:18.891000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Nov 1 00:54:18.897000 audit[3526]: AVC avc: denied { bpf } for pid=3526 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:18.897000 audit[3526]: AVC avc: denied { bpf } for pid=3526 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:18.897000 audit[3526]: AVC avc: denied { perfmon } for pid=3526 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:18.897000 audit[3526]: AVC avc: denied { perfmon } for pid=3526 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:18.897000 audit[3526]: AVC avc: denied { perfmon } for pid=3526 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:18.897000 audit[3526]: AVC avc: denied { perfmon } for pid=3526 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:18.897000 audit[3526]: AVC avc: denied { perfmon } for pid=3526 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:18.897000 audit[3526]: AVC avc: denied { bpf } for pid=3526 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:18.897000 
audit[3526]: AVC avc: denied { bpf } for pid=3526 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:18.897000 audit: BPF prog-id=13 op=LOAD Nov 1 00:54:18.897000 audit[3526]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdfacd6060 a2=98 a3=3 items=0 ppid=3335 pid=3526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:18.897000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:54:18.897000 audit: BPF prog-id=13 op=UNLOAD Nov 1 00:54:18.900000 audit[3526]: AVC avc: denied { bpf } for pid=3526 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:18.900000 audit[3526]: AVC avc: denied { bpf } for pid=3526 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:18.900000 audit[3526]: AVC avc: denied { perfmon } for pid=3526 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:18.900000 audit[3526]: AVC avc: denied { perfmon } for pid=3526 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:18.900000 audit[3526]: AVC avc: denied { perfmon } for pid=3526 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:18.900000 audit[3526]: AVC avc: denied { perfmon } for pid=3526 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Nov 1 00:54:18.900000 audit[3526]: AVC avc: denied { perfmon } for pid=3526 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:18.900000 audit[3526]: AVC avc: denied { bpf } for pid=3526 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:18.900000 audit[3526]: AVC avc: denied { bpf } for pid=3526 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:18.900000 audit: BPF prog-id=14 op=LOAD Nov 1 00:54:18.900000 audit[3526]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffdfacd5e50 a2=94 a3=54428f items=0 ppid=3335 pid=3526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:18.900000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:54:18.901000 audit: BPF prog-id=14 op=UNLOAD Nov 1 00:54:18.901000 audit[3526]: AVC avc: denied { bpf } for pid=3526 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:18.901000 audit[3526]: AVC avc: denied { bpf } for pid=3526 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:18.901000 audit[3526]: AVC avc: denied { perfmon } for pid=3526 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:18.901000 audit[3526]: AVC avc: denied { perfmon } for pid=3526 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:18.901000 audit[3526]: AVC avc: denied { perfmon } for pid=3526 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:18.901000 audit[3526]: AVC avc: denied { perfmon } for pid=3526 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:18.901000 audit[3526]: AVC avc: denied { perfmon } for pid=3526 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:18.901000 audit[3526]: AVC avc: denied { bpf } for pid=3526 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:18.901000 audit[3526]: AVC avc: denied { bpf } for pid=3526 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:18.901000 audit: BPF prog-id=15 op=LOAD Nov 1 00:54:18.901000 audit[3526]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffdfacd5e80 a2=94 a3=2 items=0 ppid=3335 pid=3526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:18.901000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:54:18.901000 audit: BPF prog-id=15 op=UNLOAD Nov 1 00:54:19.001167 systemd-networkd[1058]: califf827e12b10: Link UP Nov 1 00:54:19.005233 systemd-networkd[1058]: califf827e12b10: Gained carrier Nov 1 00:54:19.005823 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): califf827e12b10: link becomes ready Nov 1 00:54:19.031829 env[1305]: 2025-11-01 00:54:18.928 [INFO][3498] 
cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--0efaf8214b-k8s-coredns--668d6bf9bc--95hcw-eth0 coredns-668d6bf9bc- kube-system 04f4ba43-b773-4444-b355-28563af8171b 917 0 2025-11-01 00:53:42 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510.3.8-n-0efaf8214b coredns-668d6bf9bc-95hcw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califf827e12b10 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a1c082772d96d5cfbab1ee96c6e97fa58c9c28ab6ac98e31ed6ece0cd02369c5" Namespace="kube-system" Pod="coredns-668d6bf9bc-95hcw" WorkloadEndpoint="ci--3510.3.8--n--0efaf8214b-k8s-coredns--668d6bf9bc--95hcw-" Nov 1 00:54:19.031829 env[1305]: 2025-11-01 00:54:18.928 [INFO][3498] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a1c082772d96d5cfbab1ee96c6e97fa58c9c28ab6ac98e31ed6ece0cd02369c5" Namespace="kube-system" Pod="coredns-668d6bf9bc-95hcw" WorkloadEndpoint="ci--3510.3.8--n--0efaf8214b-k8s-coredns--668d6bf9bc--95hcw-eth0" Nov 1 00:54:19.031829 env[1305]: 2025-11-01 00:54:18.956 [INFO][3531] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a1c082772d96d5cfbab1ee96c6e97fa58c9c28ab6ac98e31ed6ece0cd02369c5" HandleID="k8s-pod-network.a1c082772d96d5cfbab1ee96c6e97fa58c9c28ab6ac98e31ed6ece0cd02369c5" Workload="ci--3510.3.8--n--0efaf8214b-k8s-coredns--668d6bf9bc--95hcw-eth0" Nov 1 00:54:19.031829 env[1305]: 2025-11-01 00:54:18.956 [INFO][3531] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a1c082772d96d5cfbab1ee96c6e97fa58c9c28ab6ac98e31ed6ece0cd02369c5" HandleID="k8s-pod-network.a1c082772d96d5cfbab1ee96c6e97fa58c9c28ab6ac98e31ed6ece0cd02369c5" Workload="ci--3510.3.8--n--0efaf8214b-k8s-coredns--668d6bf9bc--95hcw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, 
Num6:0, HandleID:(*string)(0xc0002d5680), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510.3.8-n-0efaf8214b", "pod":"coredns-668d6bf9bc-95hcw", "timestamp":"2025-11-01 00:54:18.956318386 +0000 UTC"}, Hostname:"ci-3510.3.8-n-0efaf8214b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:54:19.031829 env[1305]: 2025-11-01 00:54:18.956 [INFO][3531] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:54:19.031829 env[1305]: 2025-11-01 00:54:18.956 [INFO][3531] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:54:19.031829 env[1305]: 2025-11-01 00:54:18.956 [INFO][3531] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-0efaf8214b' Nov 1 00:54:19.031829 env[1305]: 2025-11-01 00:54:18.964 [INFO][3531] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a1c082772d96d5cfbab1ee96c6e97fa58c9c28ab6ac98e31ed6ece0cd02369c5" host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:19.031829 env[1305]: 2025-11-01 00:54:18.969 [INFO][3531] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:19.031829 env[1305]: 2025-11-01 00:54:18.974 [INFO][3531] ipam/ipam.go 511: Trying affinity for 192.168.55.128/26 host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:19.031829 env[1305]: 2025-11-01 00:54:18.976 [INFO][3531] ipam/ipam.go 158: Attempting to load block cidr=192.168.55.128/26 host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:19.031829 env[1305]: 2025-11-01 00:54:18.981 [INFO][3531] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.55.128/26 host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:19.031829 env[1305]: 2025-11-01 00:54:18.981 [INFO][3531] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.55.128/26 
handle="k8s-pod-network.a1c082772d96d5cfbab1ee96c6e97fa58c9c28ab6ac98e31ed6ece0cd02369c5" host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:19.031829 env[1305]: 2025-11-01 00:54:18.983 [INFO][3531] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a1c082772d96d5cfbab1ee96c6e97fa58c9c28ab6ac98e31ed6ece0cd02369c5 Nov 1 00:54:19.031829 env[1305]: 2025-11-01 00:54:18.988 [INFO][3531] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.55.128/26 handle="k8s-pod-network.a1c082772d96d5cfbab1ee96c6e97fa58c9c28ab6ac98e31ed6ece0cd02369c5" host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:19.031829 env[1305]: 2025-11-01 00:54:18.995 [INFO][3531] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.55.130/26] block=192.168.55.128/26 handle="k8s-pod-network.a1c082772d96d5cfbab1ee96c6e97fa58c9c28ab6ac98e31ed6ece0cd02369c5" host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:19.031829 env[1305]: 2025-11-01 00:54:18.995 [INFO][3531] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.55.130/26] handle="k8s-pod-network.a1c082772d96d5cfbab1ee96c6e97fa58c9c28ab6ac98e31ed6ece0cd02369c5" host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:19.031829 env[1305]: 2025-11-01 00:54:18.995 [INFO][3531] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:54:19.031829 env[1305]: 2025-11-01 00:54:18.995 [INFO][3531] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.55.130/26] IPv6=[] ContainerID="a1c082772d96d5cfbab1ee96c6e97fa58c9c28ab6ac98e31ed6ece0cd02369c5" HandleID="k8s-pod-network.a1c082772d96d5cfbab1ee96c6e97fa58c9c28ab6ac98e31ed6ece0cd02369c5" Workload="ci--3510.3.8--n--0efaf8214b-k8s-coredns--668d6bf9bc--95hcw-eth0" Nov 1 00:54:19.032925 env[1305]: 2025-11-01 00:54:18.997 [INFO][3498] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a1c082772d96d5cfbab1ee96c6e97fa58c9c28ab6ac98e31ed6ece0cd02369c5" Namespace="kube-system" Pod="coredns-668d6bf9bc-95hcw" WorkloadEndpoint="ci--3510.3.8--n--0efaf8214b-k8s-coredns--668d6bf9bc--95hcw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0efaf8214b-k8s-coredns--668d6bf9bc--95hcw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"04f4ba43-b773-4444-b355-28563af8171b", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 53, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0efaf8214b", ContainerID:"", Pod:"coredns-668d6bf9bc-95hcw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.55.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califf827e12b10", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:54:19.032925 env[1305]: 2025-11-01 00:54:18.997 [INFO][3498] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.55.130/32] ContainerID="a1c082772d96d5cfbab1ee96c6e97fa58c9c28ab6ac98e31ed6ece0cd02369c5" Namespace="kube-system" Pod="coredns-668d6bf9bc-95hcw" WorkloadEndpoint="ci--3510.3.8--n--0efaf8214b-k8s-coredns--668d6bf9bc--95hcw-eth0" Nov 1 00:54:19.032925 env[1305]: 2025-11-01 00:54:18.997 [INFO][3498] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califf827e12b10 ContainerID="a1c082772d96d5cfbab1ee96c6e97fa58c9c28ab6ac98e31ed6ece0cd02369c5" Namespace="kube-system" Pod="coredns-668d6bf9bc-95hcw" WorkloadEndpoint="ci--3510.3.8--n--0efaf8214b-k8s-coredns--668d6bf9bc--95hcw-eth0" Nov 1 00:54:19.032925 env[1305]: 2025-11-01 00:54:19.002 [INFO][3498] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a1c082772d96d5cfbab1ee96c6e97fa58c9c28ab6ac98e31ed6ece0cd02369c5" Namespace="kube-system" Pod="coredns-668d6bf9bc-95hcw" WorkloadEndpoint="ci--3510.3.8--n--0efaf8214b-k8s-coredns--668d6bf9bc--95hcw-eth0" Nov 1 00:54:19.032925 env[1305]: 2025-11-01 00:54:19.011 [INFO][3498] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a1c082772d96d5cfbab1ee96c6e97fa58c9c28ab6ac98e31ed6ece0cd02369c5" Namespace="kube-system" Pod="coredns-668d6bf9bc-95hcw" WorkloadEndpoint="ci--3510.3.8--n--0efaf8214b-k8s-coredns--668d6bf9bc--95hcw-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0efaf8214b-k8s-coredns--668d6bf9bc--95hcw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"04f4ba43-b773-4444-b355-28563af8171b", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 53, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0efaf8214b", ContainerID:"a1c082772d96d5cfbab1ee96c6e97fa58c9c28ab6ac98e31ed6ece0cd02369c5", Pod:"coredns-668d6bf9bc-95hcw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.55.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califf827e12b10", MAC:"0a:d6:86:b2:13:cc", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:54:19.032925 env[1305]: 2025-11-01 00:54:19.029 [INFO][3498] cni-plugin/k8s.go 532: Wrote updated endpoint to 
datastore ContainerID="a1c082772d96d5cfbab1ee96c6e97fa58c9c28ab6ac98e31ed6ece0cd02369c5" Namespace="kube-system" Pod="coredns-668d6bf9bc-95hcw" WorkloadEndpoint="ci--3510.3.8--n--0efaf8214b-k8s-coredns--668d6bf9bc--95hcw-eth0" Nov 1 00:54:19.046837 env[1305]: time="2025-11-01T00:54:19.046738493Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:54:19.047029 env[1305]: time="2025-11-01T00:54:19.047001574Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:54:19.047186 env[1305]: time="2025-11-01T00:54:19.047156735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:54:19.047568 env[1305]: time="2025-11-01T00:54:19.047517972Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a1c082772d96d5cfbab1ee96c6e97fa58c9c28ab6ac98e31ed6ece0cd02369c5 pid=3551 runtime=io.containerd.runc.v2 Nov 1 00:54:19.082000 audit[3526]: AVC avc: denied { bpf } for pid=3526 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.082000 audit[3526]: AVC avc: denied { bpf } for pid=3526 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.082000 audit[3526]: AVC avc: denied { perfmon } for pid=3526 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.082000 audit[3526]: AVC avc: denied { perfmon } for pid=3526 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.082000 
audit[3526]: AVC avc: denied { perfmon } for pid=3526 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.082000 audit[3526]: AVC avc: denied { perfmon } for pid=3526 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.082000 audit[3526]: AVC avc: denied { perfmon } for pid=3526 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.082000 audit[3526]: AVC avc: denied { bpf } for pid=3526 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.082000 audit[3526]: AVC avc: denied { bpf } for pid=3526 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.082000 audit: BPF prog-id=16 op=LOAD Nov 1 00:54:19.082000 audit[3526]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffdfacd5d40 a2=94 a3=1 items=0 ppid=3335 pid=3526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.082000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:54:19.082000 audit: BPF prog-id=16 op=UNLOAD Nov 1 00:54:19.082000 audit[3526]: AVC avc: denied { perfmon } for pid=3526 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.082000 audit[3526]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffdfacd5e10 a2=50 a3=7ffdfacd5ef0 items=0 ppid=3335 pid=3526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.082000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:54:19.095000 audit[3526]: AVC avc: denied { bpf } for pid=3526 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.095000 audit[3526]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffdfacd5d50 a2=28 a3=0 items=0 ppid=3335 pid=3526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.095000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:54:19.095000 audit[3526]: AVC avc: denied { bpf } for pid=3526 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.095000 audit[3526]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffdfacd5d80 a2=28 a3=0 items=0 ppid=3335 pid=3526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.095000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:54:19.096000 audit[3526]: AVC avc: denied { bpf } for pid=3526 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.096000 audit[3526]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffdfacd5c90 a2=28 a3=0 items=0 ppid=3335 pid=3526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.096000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:54:19.096000 audit[3526]: AVC avc: denied { bpf } for pid=3526 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.096000 audit[3526]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffdfacd5da0 a2=28 a3=0 items=0 ppid=3335 pid=3526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.096000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:54:19.096000 audit[3526]: AVC avc: denied { bpf } for pid=3526 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.096000 audit[3526]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffdfacd5d80 a2=28 a3=0 items=0 ppid=3335 pid=3526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.096000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:54:19.096000 audit[3526]: AVC avc: denied { bpf } for pid=3526 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.096000 audit[3526]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffdfacd5d70 a2=28 a3=0 items=0 ppid=3335 pid=3526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.096000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:54:19.096000 audit[3526]: AVC avc: denied { bpf } for pid=3526 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.096000 audit[3526]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffdfacd5da0 a2=28 a3=0 items=0 ppid=3335 pid=3526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.096000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:54:19.097000 audit[3526]: AVC avc: denied { bpf } for pid=3526 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.097000 audit[3526]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffdfacd5d80 a2=28 a3=0 items=0 ppid=3335 pid=3526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.097000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:54:19.097000 audit[3526]: AVC avc: denied { bpf } for pid=3526 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.097000 audit[3526]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffdfacd5da0 a2=28 a3=0 items=0 ppid=3335 pid=3526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.097000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:54:19.097000 
audit[3526]: AVC avc: denied { bpf } for pid=3526 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.097000 audit[3526]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffdfacd5d70 a2=28 a3=0 items=0 ppid=3335 pid=3526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.097000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:54:19.097000 audit[3526]: AVC avc: denied { bpf } for pid=3526 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.097000 audit[3526]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffdfacd5de0 a2=28 a3=0 items=0 ppid=3335 pid=3526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.097000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:54:19.098000 audit[3526]: AVC avc: denied { perfmon } for pid=3526 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.098000 audit[3526]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffdfacd5b90 a2=50 a3=1 items=0 ppid=3335 pid=3526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.098000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:54:19.098000 audit[3526]: AVC avc: denied { bpf } for pid=3526 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.098000 audit[3526]: AVC avc: denied { bpf } for pid=3526 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.098000 audit[3526]: AVC avc: denied { perfmon } for pid=3526 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.098000 audit[3526]: AVC avc: denied { perfmon } for pid=3526 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.098000 audit[3526]: AVC avc: denied { perfmon } for pid=3526 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.098000 audit[3526]: AVC avc: denied { perfmon } for pid=3526 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.098000 audit[3526]: AVC avc: denied { perfmon } for pid=3526 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.098000 audit[3526]: AVC avc: denied { bpf } for pid=3526 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.098000 audit[3526]: AVC avc: denied { bpf } for pid=3526 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.098000 audit: BPF prog-id=17 op=LOAD Nov 1 00:54:19.098000 audit[3526]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdfacd5b90 a2=94 a3=5 items=0 
ppid=3335 pid=3526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.098000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:54:19.098000 audit: BPF prog-id=17 op=UNLOAD Nov 1 00:54:19.098000 audit[3526]: AVC avc: denied { perfmon } for pid=3526 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.098000 audit[3526]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffdfacd5c40 a2=50 a3=1 items=0 ppid=3335 pid=3526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.098000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:54:19.099000 audit[3526]: AVC avc: denied { bpf } for pid=3526 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.099000 audit[3526]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffdfacd5d60 a2=4 a3=38 items=0 ppid=3335 pid=3526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.099000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:54:19.099000 audit[3526]: AVC avc: denied { bpf } for pid=3526 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.099000 audit[3526]: AVC avc: denied { bpf } for pid=3526 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.099000 audit[3526]: AVC avc: denied { perfmon } for pid=3526 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.099000 audit[3526]: AVC avc: denied { bpf } for pid=3526 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.099000 audit[3526]: AVC avc: denied { perfmon } for pid=3526 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.099000 audit[3526]: AVC avc: denied { perfmon } for pid=3526 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.099000 audit[3526]: AVC avc: denied { perfmon } for pid=3526 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.099000 audit[3526]: AVC avc: denied { perfmon } for pid=3526 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.099000 audit[3526]: AVC avc: denied { perfmon } for pid=3526 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.099000 audit[3526]: AVC avc: denied { bpf } for pid=3526 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.099000 audit[3526]: AVC avc: denied { confidentiality } for pid=3526 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Nov 1 00:54:19.099000 audit[3526]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffdfacd5db0 a2=94 a3=6 items=0 ppid=3335 pid=3526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.099000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:54:19.100000 audit[3526]: AVC avc: denied { bpf } for pid=3526 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.100000 audit[3526]: AVC avc: denied { bpf } for pid=3526 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.100000 audit[3526]: AVC avc: denied { perfmon } for pid=3526 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.100000 audit[3526]: AVC avc: denied { bpf } for pid=3526 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.100000 audit[3526]: AVC avc: denied { perfmon } for pid=3526 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.100000 audit[3526]: AVC avc: denied { perfmon } for pid=3526 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.100000 audit[3526]: AVC avc: denied { perfmon } for pid=3526 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 
00:54:19.100000 audit[3526]: AVC avc: denied { perfmon } for pid=3526 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.100000 audit[3526]: AVC avc: denied { perfmon } for pid=3526 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.100000 audit[3526]: AVC avc: denied { bpf } for pid=3526 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.100000 audit[3526]: AVC avc: denied { confidentiality } for pid=3526 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Nov 1 00:54:19.100000 audit[3526]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffdfacd5560 a2=94 a3=88 items=0 ppid=3335 pid=3526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.100000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:54:19.101000 audit[3526]: AVC avc: denied { bpf } for pid=3526 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.101000 audit[3526]: AVC avc: denied { bpf } for pid=3526 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.101000 audit[3526]: AVC avc: denied { perfmon } for pid=3526 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.101000 audit[3526]: AVC avc: 
denied { bpf } for pid=3526 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.101000 audit[3526]: AVC avc: denied { perfmon } for pid=3526 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.101000 audit[3526]: AVC avc: denied { perfmon } for pid=3526 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.101000 audit[3526]: AVC avc: denied { perfmon } for pid=3526 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.101000 audit[3526]: AVC avc: denied { perfmon } for pid=3526 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.101000 audit[3526]: AVC avc: denied { perfmon } for pid=3526 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.101000 audit[3526]: AVC avc: denied { bpf } for pid=3526 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.101000 audit[3526]: AVC avc: denied { confidentiality } for pid=3526 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Nov 1 00:54:19.101000 audit[3526]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffdfacd5560 a2=94 a3=88 items=0 ppid=3335 pid=3526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.101000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:54:19.129000 audit[3583]: AVC avc: denied { bpf } for pid=3583 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.129000 audit[3583]: AVC avc: denied { bpf } for pid=3583 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.129000 audit[3583]: AVC avc: denied { perfmon } for pid=3583 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.129000 audit[3583]: AVC avc: denied { perfmon } for pid=3583 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.129000 audit[3583]: AVC avc: denied { perfmon } for pid=3583 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.129000 audit[3583]: AVC avc: denied { perfmon } for pid=3583 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.129000 audit[3583]: AVC avc: denied { perfmon } for pid=3583 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.129000 audit[3583]: AVC avc: denied { bpf } for pid=3583 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.129000 audit[3583]: AVC avc: denied { bpf } for pid=3583 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.129000 audit: BPF prog-id=18 op=LOAD Nov 1 00:54:19.129000 audit[3583]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd7f3a9610 a2=98 a3=1999999999999999 items=0 ppid=3335 pid=3583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.129000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Nov 1 00:54:19.130000 audit: BPF prog-id=18 op=UNLOAD Nov 1 00:54:19.130000 audit[3583]: AVC avc: denied { bpf } for pid=3583 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.130000 audit[3583]: AVC avc: denied { bpf } for pid=3583 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.130000 audit[3583]: AVC avc: denied { perfmon } for pid=3583 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.130000 audit[3583]: AVC avc: denied { perfmon } for pid=3583 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.130000 audit[3583]: AVC avc: denied { perfmon } for pid=3583 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.130000 audit[3583]: AVC avc: denied { perfmon } for pid=3583 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.130000 audit[3583]: AVC avc: denied { perfmon } for pid=3583 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.130000 audit[3583]: AVC avc: denied { bpf } for pid=3583 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.130000 audit[3583]: AVC avc: denied { bpf } for pid=3583 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.130000 audit: BPF prog-id=19 op=LOAD Nov 1 00:54:19.130000 audit[3583]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd7f3a94f0 a2=94 a3=ffff items=0 ppid=3335 pid=3583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.130000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Nov 1 00:54:19.130000 audit: BPF prog-id=19 op=UNLOAD Nov 1 00:54:19.130000 audit[3583]: AVC avc: denied { bpf } for pid=3583 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.130000 audit[3583]: AVC avc: denied { bpf } for pid=3583 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.130000 audit[3583]: AVC avc: denied { perfmon } for pid=3583 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.130000 audit[3583]: AVC avc: denied { perfmon } for pid=3583 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.130000 audit[3583]: AVC avc: denied { perfmon } for pid=3583 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.130000 audit[3583]: AVC avc: denied { perfmon } for pid=3583 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.130000 audit[3583]: AVC avc: denied { perfmon } for pid=3583 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.130000 audit[3583]: AVC avc: denied { bpf } for pid=3583 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.130000 audit[3583]: AVC avc: denied { bpf } for pid=3583 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.130000 audit: BPF prog-id=20 op=LOAD Nov 1 00:54:19.130000 audit[3583]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd7f3a9530 a2=94 a3=7ffd7f3a9710 items=0 ppid=3335 pid=3583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.130000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Nov 1 00:54:19.130000 audit: BPF prog-id=20 op=UNLOAD Nov 1 00:54:19.140992 env[1305]: time="2025-11-01T00:54:19.140924744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-95hcw,Uid:04f4ba43-b773-4444-b355-28563af8171b,Namespace:kube-system,Attempt:1,} returns sandbox id \"a1c082772d96d5cfbab1ee96c6e97fa58c9c28ab6ac98e31ed6ece0cd02369c5\"" Nov 1 00:54:19.141926 kubelet[2095]: E1101 00:54:19.141898 2095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:54:19.144696 env[1305]: time="2025-11-01T00:54:19.144651639Z" level=info msg="CreateContainer within sandbox \"a1c082772d96d5cfbab1ee96c6e97fa58c9c28ab6ac98e31ed6ece0cd02369c5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:54:19.175976 env[1305]: time="2025-11-01T00:54:19.170209757Z" level=info msg="CreateContainer within sandbox \"a1c082772d96d5cfbab1ee96c6e97fa58c9c28ab6ac98e31ed6ece0cd02369c5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"594a8e676a2f292de321e89b3fd35abd453a249017f3522b8d8be0383bccfe0c\"" Nov 1 00:54:19.175976 env[1305]: time="2025-11-01T00:54:19.171078105Z" level=info msg="StartContainer for \"594a8e676a2f292de321e89b3fd35abd453a249017f3522b8d8be0383bccfe0c\"" Nov 1 00:54:19.175976 env[1305]: time="2025-11-01T00:54:19.174831719Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:54:19.175976 env[1305]: time="2025-11-01T00:54:19.175891520Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:54:19.177213 kubelet[2095]: E1101 00:54:19.176112 2095 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:54:19.177336 kubelet[2095]: E1101 00:54:19.177211 2095 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:54:19.182334 kubelet[2095]: E1101 00:54:19.182266 2095 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:124ea995052c4baba627fef25423b142,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z5bfl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6dfb57dc84-knf65_calico-system(0234b74a-300a-4772-b752-16560b6b9a9c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:54:19.194693 env[1305]: time="2025-11-01T00:54:19.189318478Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:54:19.298840 
systemd-networkd[1058]: vxlan.calico: Link UP Nov 1 00:54:19.298851 systemd-networkd[1058]: vxlan.calico: Gained carrier Nov 1 00:54:19.305000 audit[3644]: AVC avc: denied { bpf } for pid=3644 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.305000 audit[3644]: AVC avc: denied { bpf } for pid=3644 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.305000 audit[3644]: AVC avc: denied { perfmon } for pid=3644 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.305000 audit[3644]: AVC avc: denied { perfmon } for pid=3644 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.305000 audit[3644]: AVC avc: denied { perfmon } for pid=3644 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.305000 audit[3644]: AVC avc: denied { perfmon } for pid=3644 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.305000 audit[3644]: AVC avc: denied { perfmon } for pid=3644 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.305000 audit[3644]: AVC avc: denied { bpf } for pid=3644 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.305000 audit[3644]: AVC avc: denied { bpf } for pid=3644 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.305000 audit: BPF prog-id=21 op=LOAD Nov 1 00:54:19.305000 audit[3644]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe34ecdc70 a2=98 a3=0 items=0 ppid=3335 pid=3644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.305000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:54:19.308000 audit: BPF prog-id=21 op=UNLOAD Nov 1 00:54:19.308000 audit[3644]: AVC avc: denied { bpf } for pid=3644 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.308000 audit[3644]: AVC avc: denied { bpf } for pid=3644 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.308000 audit[3644]: AVC avc: denied { perfmon } for pid=3644 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.308000 audit[3644]: AVC avc: denied { perfmon } for pid=3644 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.308000 audit[3644]: AVC avc: denied { perfmon } for pid=3644 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.308000 audit[3644]: AVC avc: denied { perfmon } for pid=3644 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.308000 audit[3644]: AVC avc: denied { perfmon } for pid=3644 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.308000 audit[3644]: AVC avc: denied { bpf } for pid=3644 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.308000 audit[3644]: AVC avc: denied { bpf } for pid=3644 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.308000 audit: BPF prog-id=22 op=LOAD Nov 1 00:54:19.308000 audit[3644]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe34ecda80 a2=94 a3=54428f items=0 ppid=3335 pid=3644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.308000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:54:19.308000 audit: BPF prog-id=22 op=UNLOAD Nov 1 00:54:19.308000 audit[3644]: AVC avc: denied { bpf } for pid=3644 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.308000 audit[3644]: AVC avc: denied { bpf } for pid=3644 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.308000 audit[3644]: AVC avc: denied { perfmon } for pid=3644 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.308000 audit[3644]: AVC avc: denied { perfmon } for pid=3644 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.308000 audit[3644]: AVC avc: denied { perfmon } for pid=3644 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.308000 audit[3644]: AVC avc: denied { perfmon } for pid=3644 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.308000 audit[3644]: AVC avc: denied { perfmon } for pid=3644 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.308000 audit[3644]: AVC avc: denied { bpf } for pid=3644 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.308000 audit[3644]: AVC avc: denied { bpf } for pid=3644 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.308000 audit: BPF prog-id=23 op=LOAD Nov 1 00:54:19.308000 audit[3644]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe34ecdab0 a2=94 a3=2 items=0 ppid=3335 pid=3644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.308000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 
00:54:19.308000 audit: BPF prog-id=23 op=UNLOAD Nov 1 00:54:19.308000 audit[3644]: AVC avc: denied { bpf } for pid=3644 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.308000 audit[3644]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe34ecd980 a2=28 a3=0 items=0 ppid=3335 pid=3644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.308000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:54:19.308000 audit[3644]: AVC avc: denied { bpf } for pid=3644 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.308000 audit[3644]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe34ecd9b0 a2=28 a3=0 items=0 ppid=3335 pid=3644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.308000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:54:19.308000 audit[3644]: AVC avc: denied { bpf } for pid=3644 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.308000 audit[3644]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe34ecd8c0 a2=28 a3=0 items=0 ppid=3335 
pid=3644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.308000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:54:19.308000 audit[3644]: AVC avc: denied { bpf } for pid=3644 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.308000 audit[3644]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe34ecd9d0 a2=28 a3=0 items=0 ppid=3335 pid=3644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.308000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:54:19.308000 audit[3644]: AVC avc: denied { bpf } for pid=3644 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.308000 audit[3644]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe34ecd9b0 a2=28 a3=0 items=0 ppid=3335 pid=3644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.308000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:54:19.308000 audit[3644]: AVC avc: denied { bpf } for pid=3644 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.308000 audit[3644]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe34ecd9a0 a2=28 a3=0 items=0 ppid=3335 pid=3644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.308000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:54:19.308000 audit[3644]: AVC avc: denied { bpf } for pid=3644 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.308000 audit[3644]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe34ecd9d0 a2=28 a3=0 items=0 ppid=3335 pid=3644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.308000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:54:19.308000 audit[3644]: AVC avc: denied { bpf } for pid=3644 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Nov 1 00:54:19.308000 audit[3644]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe34ecd9b0 a2=28 a3=0 items=0 ppid=3335 pid=3644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.308000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:54:19.308000 audit[3644]: AVC avc: denied { bpf } for pid=3644 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.308000 audit[3644]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe34ecd9d0 a2=28 a3=0 items=0 ppid=3335 pid=3644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.308000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:54:19.308000 audit[3644]: AVC avc: denied { bpf } for pid=3644 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.308000 audit[3644]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe34ecd9a0 a2=28 a3=0 items=0 ppid=3335 pid=3644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.308000 audit: 
PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:54:19.308000 audit[3644]: AVC avc: denied { bpf } for pid=3644 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.308000 audit[3644]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe34ecda10 a2=28 a3=0 items=0 ppid=3335 pid=3644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.308000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:54:19.309000 audit[3644]: AVC avc: denied { bpf } for pid=3644 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.309000 audit[3644]: AVC avc: denied { bpf } for pid=3644 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.309000 audit[3644]: AVC avc: denied { perfmon } for pid=3644 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.309000 audit[3644]: AVC avc: denied { perfmon } for pid=3644 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.309000 audit[3644]: AVC avc: denied { perfmon } for pid=3644 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.309000 audit[3644]: AVC avc: denied { perfmon } for pid=3644 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.309000 audit[3644]: AVC avc: denied { perfmon } for pid=3644 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.309000 audit[3644]: AVC avc: denied { bpf } for pid=3644 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.309000 audit[3644]: AVC avc: denied { bpf } for pid=3644 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.309000 audit: BPF prog-id=24 op=LOAD Nov 1 00:54:19.309000 audit[3644]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe34ecd880 a2=94 a3=0 items=0 ppid=3335 pid=3644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.309000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:54:19.309000 audit: BPF prog-id=24 op=UNLOAD Nov 1 00:54:19.309000 audit[3644]: AVC avc: denied { bpf } for pid=3644 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.309000 audit[3644]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7ffe34ecd870 a2=50 a3=2800 items=0 
ppid=3335 pid=3644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.309000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:54:19.309000 audit[3644]: AVC avc: denied { bpf } for pid=3644 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.309000 audit[3644]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=0 a1=7ffe34ecd870 a2=50 a3=2800 items=0 ppid=3335 pid=3644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.309000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:54:19.309000 audit[3644]: AVC avc: denied { bpf } for pid=3644 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.309000 audit[3644]: AVC avc: denied { bpf } for pid=3644 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.309000 audit[3644]: AVC avc: denied { bpf } for pid=3644 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.309000 audit[3644]: AVC avc: denied { perfmon } for pid=3644 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.309000 audit[3644]: AVC avc: denied { perfmon } for pid=3644 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.309000 audit[3644]: AVC avc: denied { perfmon } for pid=3644 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.309000 audit[3644]: AVC avc: denied { perfmon } for pid=3644 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.309000 audit[3644]: AVC avc: denied { perfmon } for pid=3644 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.309000 audit[3644]: AVC avc: denied { bpf } for pid=3644 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.309000 audit[3644]: AVC avc: denied { bpf } for pid=3644 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.309000 audit: BPF prog-id=25 op=LOAD Nov 1 00:54:19.309000 audit[3644]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe34ecd090 a2=94 a3=2 items=0 ppid=3335 pid=3644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.309000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:54:19.309000 audit: BPF prog-id=25 op=UNLOAD Nov 1 00:54:19.309000 audit[3644]: AVC avc: denied { bpf } for pid=3644 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.309000 audit[3644]: AVC avc: denied { bpf } for pid=3644 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.309000 audit[3644]: AVC avc: denied { bpf } for pid=3644 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.309000 audit[3644]: AVC avc: denied { perfmon } for pid=3644 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.309000 audit[3644]: AVC avc: denied { perfmon } for pid=3644 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.309000 audit[3644]: AVC avc: denied { perfmon } for pid=3644 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.309000 audit[3644]: AVC avc: denied { perfmon } for pid=3644 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.309000 audit[3644]: AVC avc: denied { perfmon } for pid=3644 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 
00:54:19.309000 audit[3644]: AVC avc: denied { bpf } for pid=3644 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.309000 audit[3644]: AVC avc: denied { bpf } for pid=3644 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.309000 audit: BPF prog-id=26 op=LOAD Nov 1 00:54:19.309000 audit[3644]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe34ecd190 a2=94 a3=30 items=0 ppid=3335 pid=3644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.309000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:54:19.315000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.315000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.315000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.315000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.315000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.315000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.315000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.315000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.315000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.315000 audit: BPF prog-id=27 op=LOAD Nov 1 00:54:19.315000 audit[3651]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffece92c9e0 a2=98 a3=0 items=0 ppid=3335 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.315000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:54:19.315000 audit: BPF prog-id=27 op=UNLOAD Nov 1 00:54:19.315000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.315000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.315000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.315000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.315000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.315000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.315000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.315000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.315000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.315000 audit: BPF prog-id=28 op=LOAD Nov 1 00:54:19.315000 audit[3651]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffece92c7d0 a2=94 a3=54428f items=0 ppid=3335 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.315000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:54:19.315000 audit: BPF prog-id=28 op=UNLOAD Nov 1 00:54:19.315000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.315000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.315000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.315000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.315000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.315000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.315000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.315000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.315000 audit[3651]: AVC avc: 
denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.315000 audit: BPF prog-id=29 op=LOAD Nov 1 00:54:19.315000 audit[3651]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffece92c800 a2=94 a3=2 items=0 ppid=3335 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.315000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:54:19.315000 audit: BPF prog-id=29 op=UNLOAD Nov 1 00:54:19.357369 env[1305]: time="2025-11-01T00:54:19.339810040Z" level=info msg="StartContainer for \"594a8e676a2f292de321e89b3fd35abd453a249017f3522b8d8be0383bccfe0c\" returns successfully" Nov 1 00:54:19.466589 env[1305]: time="2025-11-01T00:54:19.466546323Z" level=info msg="StopPodSandbox for \"418d6121b43496d539dd60c11a3eccbd66c28dbcab065ad62ba4cbb1ed4d68aa\"" Nov 1 00:54:19.496461 env[1305]: time="2025-11-01T00:54:19.496380853Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:54:19.496000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.496000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.496000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 
00:54:19.496000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.496000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.496000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.496000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.496000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.496000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.496000 audit: BPF prog-id=30 op=LOAD Nov 1 00:54:19.496000 audit[3651]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffece92c6c0 a2=94 a3=1 items=0 ppid=3335 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.496000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:54:19.496000 audit: BPF prog-id=30 op=UNLOAD Nov 1 00:54:19.496000 audit[3651]: AVC avc: denied { perfmon 
} for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.496000 audit[3651]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffece92c790 a2=50 a3=7ffece92c870 items=0 ppid=3335 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.496000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:54:19.499070 env[1305]: time="2025-11-01T00:54:19.499003343Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:54:19.499479 kubelet[2095]: E1101 00:54:19.499413 2095 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:54:19.499479 kubelet[2095]: E1101 00:54:19.499468 2095 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:54:19.500128 kubelet[2095]: E1101 00:54:19.499589 2095 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z5bfl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in 
pod whisker-6dfb57dc84-knf65_calico-system(0234b74a-300a-4772-b752-16560b6b9a9c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:54:19.502210 kubelet[2095]: E1101 00:54:19.502120 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6dfb57dc84-knf65" podUID="0234b74a-300a-4772-b752-16560b6b9a9c" Nov 1 00:54:19.512000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.512000 audit[3651]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffece92c6d0 a2=28 a3=0 items=0 ppid=3335 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.512000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:54:19.512000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.512000 audit[3651]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffece92c700 a2=28 a3=0 items=0 ppid=3335 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.512000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:54:19.512000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.512000 audit[3651]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffece92c610 a2=28 a3=0 items=0 ppid=3335 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.512000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:54:19.512000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.512000 audit[3651]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=4 a0=12 a1=7ffece92c720 a2=28 a3=0 items=0 ppid=3335 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.512000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:54:19.512000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.512000 audit[3651]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffece92c700 a2=28 a3=0 items=0 ppid=3335 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.512000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:54:19.512000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.512000 audit[3651]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffece92c6f0 a2=28 a3=0 items=0 ppid=3335 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.512000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:54:19.512000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.512000 audit[3651]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffece92c720 a2=28 a3=0 items=0 ppid=3335 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.512000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:54:19.512000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.512000 audit[3651]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffece92c700 a2=28 a3=0 items=0 ppid=3335 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.512000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:54:19.512000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.512000 audit[3651]: SYSCALL arch=c000003e 
syscall=321 success=no exit=-22 a0=12 a1=7ffece92c720 a2=28 a3=0 items=0 ppid=3335 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.512000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:54:19.512000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.512000 audit[3651]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffece92c6f0 a2=28 a3=0 items=0 ppid=3335 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.512000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:54:19.512000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.512000 audit[3651]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffece92c760 a2=28 a3=0 items=0 ppid=3335 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.512000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:54:19.513000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.513000 audit[3651]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffece92c510 a2=50 a3=1 items=0 ppid=3335 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.513000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:54:19.513000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.513000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.513000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.513000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.513000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Nov 1 00:54:19.513000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.513000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.513000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.513000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.513000 audit: BPF prog-id=31 op=LOAD Nov 1 00:54:19.513000 audit[3651]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffece92c510 a2=94 a3=5 items=0 ppid=3335 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.513000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:54:19.513000 audit: BPF prog-id=31 op=UNLOAD Nov 1 00:54:19.513000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.513000 audit[3651]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffece92c5c0 a2=50 a3=1 items=0 ppid=3335 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.513000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:54:19.513000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.513000 audit[3651]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffece92c6e0 a2=4 a3=38 items=0 ppid=3335 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.513000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:54:19.513000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.513000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.513000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.513000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.513000 audit[3651]: AVC avc: denied { 
perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.513000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.513000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.513000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.513000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.513000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.513000 audit[3651]: AVC avc: denied { confidentiality } for pid=3651 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Nov 1 00:54:19.513000 audit[3651]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffece92c730 a2=94 a3=6 items=0 ppid=3335 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.513000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:54:19.513000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.513000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.513000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.513000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.513000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.513000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.513000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.513000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.513000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.513000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.513000 audit[3651]: AVC avc: denied { confidentiality } for pid=3651 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Nov 1 00:54:19.513000 audit[3651]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffece92bee0 a2=94 a3=88 items=0 ppid=3335 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.513000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:54:19.513000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.513000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.513000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.513000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 
1 00:54:19.513000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.513000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.513000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.513000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.513000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.513000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.513000 audit[3651]: AVC avc: denied { confidentiality } for pid=3651 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Nov 1 00:54:19.513000 audit[3651]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffece92bee0 a2=94 a3=88 items=0 ppid=3335 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.513000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:54:19.514000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.514000 audit[3651]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffece92d910 a2=10 a3=f8f00800 items=0 ppid=3335 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.514000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:54:19.514000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.514000 audit[3651]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffece92d7b0 a2=10 a3=3 items=0 ppid=3335 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.514000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:54:19.514000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.514000 audit[3651]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=0 a0=f a1=7ffece92d750 a2=10 a3=3 items=0 ppid=3335 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.514000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:54:19.514000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:54:19.514000 audit[3651]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffece92d750 a2=10 a3=7 items=0 ppid=3335 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.514000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:54:19.518000 audit: BPF prog-id=26 op=UNLOAD Nov 1 00:54:19.584994 systemd-networkd[1058]: cali1d392c1f365: Gained IPv6LL Nov 1 00:54:19.609005 env[1305]: 2025-11-01 00:54:19.562 [INFO][3666] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="418d6121b43496d539dd60c11a3eccbd66c28dbcab065ad62ba4cbb1ed4d68aa" Nov 1 00:54:19.609005 env[1305]: 2025-11-01 00:54:19.563 [INFO][3666] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="418d6121b43496d539dd60c11a3eccbd66c28dbcab065ad62ba4cbb1ed4d68aa" iface="eth0" netns="/var/run/netns/cni-b9f7cbb3-c1c5-1aab-dcde-7df7aff2bdf9" Nov 1 00:54:19.609005 env[1305]: 2025-11-01 00:54:19.563 [INFO][3666] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="418d6121b43496d539dd60c11a3eccbd66c28dbcab065ad62ba4cbb1ed4d68aa" iface="eth0" netns="/var/run/netns/cni-b9f7cbb3-c1c5-1aab-dcde-7df7aff2bdf9" Nov 1 00:54:19.609005 env[1305]: 2025-11-01 00:54:19.563 [INFO][3666] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="418d6121b43496d539dd60c11a3eccbd66c28dbcab065ad62ba4cbb1ed4d68aa" iface="eth0" netns="/var/run/netns/cni-b9f7cbb3-c1c5-1aab-dcde-7df7aff2bdf9" Nov 1 00:54:19.609005 env[1305]: 2025-11-01 00:54:19.563 [INFO][3666] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="418d6121b43496d539dd60c11a3eccbd66c28dbcab065ad62ba4cbb1ed4d68aa" Nov 1 00:54:19.609005 env[1305]: 2025-11-01 00:54:19.563 [INFO][3666] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="418d6121b43496d539dd60c11a3eccbd66c28dbcab065ad62ba4cbb1ed4d68aa" Nov 1 00:54:19.609005 env[1305]: 2025-11-01 00:54:19.589 [INFO][3685] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="418d6121b43496d539dd60c11a3eccbd66c28dbcab065ad62ba4cbb1ed4d68aa" HandleID="k8s-pod-network.418d6121b43496d539dd60c11a3eccbd66c28dbcab065ad62ba4cbb1ed4d68aa" Workload="ci--3510.3.8--n--0efaf8214b-k8s-coredns--668d6bf9bc--hbw54-eth0" Nov 1 00:54:19.609005 env[1305]: 2025-11-01 00:54:19.589 [INFO][3685] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:54:19.609005 env[1305]: 2025-11-01 00:54:19.589 [INFO][3685] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:54:19.609005 env[1305]: 2025-11-01 00:54:19.600 [WARNING][3685] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="418d6121b43496d539dd60c11a3eccbd66c28dbcab065ad62ba4cbb1ed4d68aa" HandleID="k8s-pod-network.418d6121b43496d539dd60c11a3eccbd66c28dbcab065ad62ba4cbb1ed4d68aa" Workload="ci--3510.3.8--n--0efaf8214b-k8s-coredns--668d6bf9bc--hbw54-eth0" Nov 1 00:54:19.609005 env[1305]: 2025-11-01 00:54:19.601 [INFO][3685] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="418d6121b43496d539dd60c11a3eccbd66c28dbcab065ad62ba4cbb1ed4d68aa" HandleID="k8s-pod-network.418d6121b43496d539dd60c11a3eccbd66c28dbcab065ad62ba4cbb1ed4d68aa" Workload="ci--3510.3.8--n--0efaf8214b-k8s-coredns--668d6bf9bc--hbw54-eth0" Nov 1 00:54:19.609005 env[1305]: 2025-11-01 00:54:19.604 [INFO][3685] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:54:19.609005 env[1305]: 2025-11-01 00:54:19.607 [INFO][3666] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="418d6121b43496d539dd60c11a3eccbd66c28dbcab065ad62ba4cbb1ed4d68aa" Nov 1 00:54:19.613703 systemd[1]: run-netns-cni\x2db9f7cbb3\x2dc1c5\x2d1aab\x2ddcde\x2d7df7aff2bdf9.mount: Deactivated successfully. 
Nov 1 00:54:19.615292 env[1305]: time="2025-11-01T00:54:19.615225441Z" level=info msg="TearDown network for sandbox \"418d6121b43496d539dd60c11a3eccbd66c28dbcab065ad62ba4cbb1ed4d68aa\" successfully" Nov 1 00:54:19.615428 env[1305]: time="2025-11-01T00:54:19.615406041Z" level=info msg="StopPodSandbox for \"418d6121b43496d539dd60c11a3eccbd66c28dbcab065ad62ba4cbb1ed4d68aa\" returns successfully" Nov 1 00:54:19.619022 kubelet[2095]: E1101 00:54:19.615973 2095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:54:19.622220 env[1305]: time="2025-11-01T00:54:19.622169403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hbw54,Uid:91b31c91-0235-44c1-8490-69cf1d3604f2,Namespace:kube-system,Attempt:1,}" Nov 1 00:54:19.638000 audit[3704]: NETFILTER_CFG table=mangle:105 family=2 entries=16 op=nft_register_chain pid=3704 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:54:19.638000 audit[3704]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7fffcdcf0ac0 a2=0 a3=7fffcdcf0aac items=0 ppid=3335 pid=3704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.638000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:54:19.660000 audit[3703]: NETFILTER_CFG table=nat:106 family=2 entries=15 op=nft_register_chain pid=3703 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:54:19.660000 audit[3703]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffec3fd5660 a2=0 a3=7ffec3fd564c items=0 ppid=3335 pid=3703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.660000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:54:19.672000 audit[3702]: NETFILTER_CFG table=raw:107 family=2 entries=21 op=nft_register_chain pid=3702 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:54:19.672000 audit[3702]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffc6c7cf110 a2=0 a3=7ffc6c7cf0fc items=0 ppid=3335 pid=3702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.672000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:54:19.673000 audit[3705]: NETFILTER_CFG table=filter:108 family=2 entries=128 op=nft_register_chain pid=3705 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:54:19.673000 audit[3705]: SYSCALL arch=c000003e syscall=46 success=yes exit=72768 a0=3 a1=7ffc90dbf050 a2=0 a3=560ec7b4b000 items=0 ppid=3335 pid=3705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.673000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:54:19.729506 kubelet[2095]: E1101 00:54:19.729369 2095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:54:19.751206 kubelet[2095]: E1101 00:54:19.751151 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6dfb57dc84-knf65" podUID="0234b74a-300a-4772-b752-16560b6b9a9c" Nov 1 00:54:19.770420 kubelet[2095]: I1101 00:54:19.770357 2095 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-95hcw" podStartSLOduration=37.770339236 podStartE2EDuration="37.770339236s" podCreationTimestamp="2025-11-01 00:53:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:54:19.746708286 +0000 UTC m=+43.504803892" watchObservedRunningTime="2025-11-01 00:54:19.770339236 +0000 UTC m=+43.528434839" Nov 1 00:54:19.825000 audit[3737]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3737 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:54:19.825000 audit[3737]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc2f82ba70 a2=0 a3=7ffc2f82ba5c items=0 ppid=2198 pid=3737 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.825000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:54:19.831000 audit[3737]: NETFILTER_CFG table=nat:110 family=2 entries=35 op=nft_register_chain pid=3737 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:54:19.831000 audit[3737]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffc2f82ba70 a2=0 a3=7ffc2f82ba5c items=0 ppid=2198 pid=3737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.831000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:54:19.863027 systemd-networkd[1058]: calibad39d8f5cf: Link UP Nov 1 00:54:19.868595 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calibad39d8f5cf: link becomes ready Nov 1 00:54:19.866186 systemd-networkd[1058]: calibad39d8f5cf: Gained carrier Nov 1 00:54:19.869000 audit[3741]: NETFILTER_CFG table=filter:111 family=2 entries=14 op=nft_register_rule pid=3741 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:54:19.869000 audit[3741]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffa3977f20 a2=0 a3=7fffa3977f0c items=0 ppid=2198 pid=3741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.869000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:54:19.877000 audit[3741]: 
NETFILTER_CFG table=nat:112 family=2 entries=20 op=nft_register_rule pid=3741 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:54:19.877000 audit[3741]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fffa3977f20 a2=0 a3=7fffa3977f0c items=0 ppid=2198 pid=3741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.877000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:54:19.883880 env[1305]: 2025-11-01 00:54:19.716 [INFO][3713] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--0efaf8214b-k8s-coredns--668d6bf9bc--hbw54-eth0 coredns-668d6bf9bc- kube-system 91b31c91-0235-44c1-8490-69cf1d3604f2 935 0 2025-11-01 00:53:42 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510.3.8-n-0efaf8214b coredns-668d6bf9bc-hbw54 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibad39d8f5cf [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9804c478642238c7b5993d8e52d82a916cfed30827c351f0768cd0121f09ee6b" Namespace="kube-system" Pod="coredns-668d6bf9bc-hbw54" WorkloadEndpoint="ci--3510.3.8--n--0efaf8214b-k8s-coredns--668d6bf9bc--hbw54-" Nov 1 00:54:19.883880 env[1305]: 2025-11-01 00:54:19.716 [INFO][3713] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9804c478642238c7b5993d8e52d82a916cfed30827c351f0768cd0121f09ee6b" Namespace="kube-system" Pod="coredns-668d6bf9bc-hbw54" WorkloadEndpoint="ci--3510.3.8--n--0efaf8214b-k8s-coredns--668d6bf9bc--hbw54-eth0" Nov 1 00:54:19.883880 env[1305]: 2025-11-01 00:54:19.808 
[INFO][3730] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9804c478642238c7b5993d8e52d82a916cfed30827c351f0768cd0121f09ee6b" HandleID="k8s-pod-network.9804c478642238c7b5993d8e52d82a916cfed30827c351f0768cd0121f09ee6b" Workload="ci--3510.3.8--n--0efaf8214b-k8s-coredns--668d6bf9bc--hbw54-eth0" Nov 1 00:54:19.883880 env[1305]: 2025-11-01 00:54:19.808 [INFO][3730] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9804c478642238c7b5993d8e52d82a916cfed30827c351f0768cd0121f09ee6b" HandleID="k8s-pod-network.9804c478642238c7b5993d8e52d82a916cfed30827c351f0768cd0121f09ee6b" Workload="ci--3510.3.8--n--0efaf8214b-k8s-coredns--668d6bf9bc--hbw54-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a37c0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510.3.8-n-0efaf8214b", "pod":"coredns-668d6bf9bc-hbw54", "timestamp":"2025-11-01 00:54:19.808383503 +0000 UTC"}, Hostname:"ci-3510.3.8-n-0efaf8214b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:54:19.883880 env[1305]: 2025-11-01 00:54:19.808 [INFO][3730] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:54:19.883880 env[1305]: 2025-11-01 00:54:19.808 [INFO][3730] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:54:19.883880 env[1305]: 2025-11-01 00:54:19.808 [INFO][3730] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-0efaf8214b' Nov 1 00:54:19.883880 env[1305]: 2025-11-01 00:54:19.820 [INFO][3730] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9804c478642238c7b5993d8e52d82a916cfed30827c351f0768cd0121f09ee6b" host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:19.883880 env[1305]: 2025-11-01 00:54:19.826 [INFO][3730] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:19.883880 env[1305]: 2025-11-01 00:54:19.831 [INFO][3730] ipam/ipam.go 511: Trying affinity for 192.168.55.128/26 host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:19.883880 env[1305]: 2025-11-01 00:54:19.833 [INFO][3730] ipam/ipam.go 158: Attempting to load block cidr=192.168.55.128/26 host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:19.883880 env[1305]: 2025-11-01 00:54:19.835 [INFO][3730] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.55.128/26 host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:19.883880 env[1305]: 2025-11-01 00:54:19.835 [INFO][3730] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.55.128/26 handle="k8s-pod-network.9804c478642238c7b5993d8e52d82a916cfed30827c351f0768cd0121f09ee6b" host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:19.883880 env[1305]: 2025-11-01 00:54:19.837 [INFO][3730] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9804c478642238c7b5993d8e52d82a916cfed30827c351f0768cd0121f09ee6b Nov 1 00:54:19.883880 env[1305]: 2025-11-01 00:54:19.842 [INFO][3730] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.55.128/26 handle="k8s-pod-network.9804c478642238c7b5993d8e52d82a916cfed30827c351f0768cd0121f09ee6b" host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:19.883880 env[1305]: 2025-11-01 00:54:19.849 [INFO][3730] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.55.131/26] block=192.168.55.128/26 
handle="k8s-pod-network.9804c478642238c7b5993d8e52d82a916cfed30827c351f0768cd0121f09ee6b" host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:19.883880 env[1305]: 2025-11-01 00:54:19.849 [INFO][3730] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.55.131/26] handle="k8s-pod-network.9804c478642238c7b5993d8e52d82a916cfed30827c351f0768cd0121f09ee6b" host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:19.883880 env[1305]: 2025-11-01 00:54:19.849 [INFO][3730] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:54:19.883880 env[1305]: 2025-11-01 00:54:19.849 [INFO][3730] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.55.131/26] IPv6=[] ContainerID="9804c478642238c7b5993d8e52d82a916cfed30827c351f0768cd0121f09ee6b" HandleID="k8s-pod-network.9804c478642238c7b5993d8e52d82a916cfed30827c351f0768cd0121f09ee6b" Workload="ci--3510.3.8--n--0efaf8214b-k8s-coredns--668d6bf9bc--hbw54-eth0" Nov 1 00:54:19.884642 env[1305]: 2025-11-01 00:54:19.852 [INFO][3713] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9804c478642238c7b5993d8e52d82a916cfed30827c351f0768cd0121f09ee6b" Namespace="kube-system" Pod="coredns-668d6bf9bc-hbw54" WorkloadEndpoint="ci--3510.3.8--n--0efaf8214b-k8s-coredns--668d6bf9bc--hbw54-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0efaf8214b-k8s-coredns--668d6bf9bc--hbw54-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"91b31c91-0235-44c1-8490-69cf1d3604f2", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 53, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0efaf8214b", ContainerID:"", Pod:"coredns-668d6bf9bc-hbw54", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.55.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibad39d8f5cf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:54:19.884642 env[1305]: 2025-11-01 00:54:19.852 [INFO][3713] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.55.131/32] ContainerID="9804c478642238c7b5993d8e52d82a916cfed30827c351f0768cd0121f09ee6b" Namespace="kube-system" Pod="coredns-668d6bf9bc-hbw54" WorkloadEndpoint="ci--3510.3.8--n--0efaf8214b-k8s-coredns--668d6bf9bc--hbw54-eth0" Nov 1 00:54:19.884642 env[1305]: 2025-11-01 00:54:19.852 [INFO][3713] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibad39d8f5cf ContainerID="9804c478642238c7b5993d8e52d82a916cfed30827c351f0768cd0121f09ee6b" Namespace="kube-system" Pod="coredns-668d6bf9bc-hbw54" WorkloadEndpoint="ci--3510.3.8--n--0efaf8214b-k8s-coredns--668d6bf9bc--hbw54-eth0" Nov 1 00:54:19.884642 env[1305]: 2025-11-01 00:54:19.866 [INFO][3713] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="9804c478642238c7b5993d8e52d82a916cfed30827c351f0768cd0121f09ee6b" Namespace="kube-system" Pod="coredns-668d6bf9bc-hbw54" WorkloadEndpoint="ci--3510.3.8--n--0efaf8214b-k8s-coredns--668d6bf9bc--hbw54-eth0" Nov 1 00:54:19.884642 env[1305]: 2025-11-01 00:54:19.867 [INFO][3713] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9804c478642238c7b5993d8e52d82a916cfed30827c351f0768cd0121f09ee6b" Namespace="kube-system" Pod="coredns-668d6bf9bc-hbw54" WorkloadEndpoint="ci--3510.3.8--n--0efaf8214b-k8s-coredns--668d6bf9bc--hbw54-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0efaf8214b-k8s-coredns--668d6bf9bc--hbw54-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"91b31c91-0235-44c1-8490-69cf1d3604f2", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 53, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0efaf8214b", ContainerID:"9804c478642238c7b5993d8e52d82a916cfed30827c351f0768cd0121f09ee6b", Pod:"coredns-668d6bf9bc-hbw54", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.55.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibad39d8f5cf", MAC:"8e:09:6b:80:13:58", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:54:19.884642 env[1305]: 2025-11-01 00:54:19.881 [INFO][3713] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9804c478642238c7b5993d8e52d82a916cfed30827c351f0768cd0121f09ee6b" Namespace="kube-system" Pod="coredns-668d6bf9bc-hbw54" WorkloadEndpoint="ci--3510.3.8--n--0efaf8214b-k8s-coredns--668d6bf9bc--hbw54-eth0" Nov 1 00:54:19.897000 audit[3758]: NETFILTER_CFG table=filter:113 family=2 entries=36 op=nft_register_chain pid=3758 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:54:19.897000 audit[3758]: SYSCALL arch=c000003e syscall=46 success=yes exit=19156 a0=3 a1=7fff6bfb1ab0 a2=0 a3=7fff6bfb1a9c items=0 ppid=3335 pid=3758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:19.897000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:54:19.900228 env[1305]: time="2025-11-01T00:54:19.900153614Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:54:19.900414 env[1305]: time="2025-11-01T00:54:19.900386525Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:54:19.900529 env[1305]: time="2025-11-01T00:54:19.900504543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:54:19.900959 env[1305]: time="2025-11-01T00:54:19.900916746Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9804c478642238c7b5993d8e52d82a916cfed30827c351f0768cd0121f09ee6b pid=3756 runtime=io.containerd.runc.v2 Nov 1 00:54:19.971990 env[1305]: time="2025-11-01T00:54:19.971937141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hbw54,Uid:91b31c91-0235-44c1-8490-69cf1d3604f2,Namespace:kube-system,Attempt:1,} returns sandbox id \"9804c478642238c7b5993d8e52d82a916cfed30827c351f0768cd0121f09ee6b\"" Nov 1 00:54:19.973067 kubelet[2095]: E1101 00:54:19.973032 2095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:54:19.980789 env[1305]: time="2025-11-01T00:54:19.980557479Z" level=info msg="CreateContainer within sandbox \"9804c478642238c7b5993d8e52d82a916cfed30827c351f0768cd0121f09ee6b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:54:19.995113 env[1305]: time="2025-11-01T00:54:19.995067862Z" level=info msg="CreateContainer within sandbox \"9804c478642238c7b5993d8e52d82a916cfed30827c351f0768cd0121f09ee6b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f26d9a37527b4ae974aac745910113ff4442b95e85297adf873f4df323a9d3cb\"" Nov 1 00:54:19.996569 env[1305]: time="2025-11-01T00:54:19.996536624Z" level=info msg="StartContainer for \"f26d9a37527b4ae974aac745910113ff4442b95e85297adf873f4df323a9d3cb\"" Nov 1 00:54:20.060953 env[1305]: time="2025-11-01T00:54:20.058186984Z" level=info msg="StartContainer for 
\"f26d9a37527b4ae974aac745910113ff4442b95e85297adf873f4df323a9d3cb\" returns successfully" Nov 1 00:54:20.468944 env[1305]: time="2025-11-01T00:54:20.468231199Z" level=info msg="StopPodSandbox for \"64b90828beb527fb7df55cab820b5b862fa0a86b6bce440a5f19677415b2d1b9\"" Nov 1 00:54:20.470808 env[1305]: time="2025-11-01T00:54:20.470485762Z" level=info msg="StopPodSandbox for \"a71c47bf229b80276f24cc7651603be913775ecceb2cc3f939c907c10c7a4fd4\"" Nov 1 00:54:20.587563 env[1305]: 2025-11-01 00:54:20.530 [INFO][3856] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a71c47bf229b80276f24cc7651603be913775ecceb2cc3f939c907c10c7a4fd4" Nov 1 00:54:20.587563 env[1305]: 2025-11-01 00:54:20.531 [INFO][3856] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a71c47bf229b80276f24cc7651603be913775ecceb2cc3f939c907c10c7a4fd4" iface="eth0" netns="/var/run/netns/cni-95f485ad-d59c-e1f1-3db7-24c24ba5a9f6" Nov 1 00:54:20.587563 env[1305]: 2025-11-01 00:54:20.531 [INFO][3856] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a71c47bf229b80276f24cc7651603be913775ecceb2cc3f939c907c10c7a4fd4" iface="eth0" netns="/var/run/netns/cni-95f485ad-d59c-e1f1-3db7-24c24ba5a9f6" Nov 1 00:54:20.587563 env[1305]: 2025-11-01 00:54:20.531 [INFO][3856] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a71c47bf229b80276f24cc7651603be913775ecceb2cc3f939c907c10c7a4fd4" iface="eth0" netns="/var/run/netns/cni-95f485ad-d59c-e1f1-3db7-24c24ba5a9f6" Nov 1 00:54:20.587563 env[1305]: 2025-11-01 00:54:20.531 [INFO][3856] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a71c47bf229b80276f24cc7651603be913775ecceb2cc3f939c907c10c7a4fd4" Nov 1 00:54:20.587563 env[1305]: 2025-11-01 00:54:20.531 [INFO][3856] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a71c47bf229b80276f24cc7651603be913775ecceb2cc3f939c907c10c7a4fd4" Nov 1 00:54:20.587563 env[1305]: 2025-11-01 00:54:20.570 [INFO][3869] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a71c47bf229b80276f24cc7651603be913775ecceb2cc3f939c907c10c7a4fd4" HandleID="k8s-pod-network.a71c47bf229b80276f24cc7651603be913775ecceb2cc3f939c907c10c7a4fd4" Workload="ci--3510.3.8--n--0efaf8214b-k8s-calico--kube--controllers--85b568d67d--z4c8c-eth0" Nov 1 00:54:20.587563 env[1305]: 2025-11-01 00:54:20.570 [INFO][3869] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:54:20.587563 env[1305]: 2025-11-01 00:54:20.570 [INFO][3869] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:54:20.587563 env[1305]: 2025-11-01 00:54:20.577 [WARNING][3869] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a71c47bf229b80276f24cc7651603be913775ecceb2cc3f939c907c10c7a4fd4" HandleID="k8s-pod-network.a71c47bf229b80276f24cc7651603be913775ecceb2cc3f939c907c10c7a4fd4" Workload="ci--3510.3.8--n--0efaf8214b-k8s-calico--kube--controllers--85b568d67d--z4c8c-eth0" Nov 1 00:54:20.587563 env[1305]: 2025-11-01 00:54:20.577 [INFO][3869] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a71c47bf229b80276f24cc7651603be913775ecceb2cc3f939c907c10c7a4fd4" HandleID="k8s-pod-network.a71c47bf229b80276f24cc7651603be913775ecceb2cc3f939c907c10c7a4fd4" Workload="ci--3510.3.8--n--0efaf8214b-k8s-calico--kube--controllers--85b568d67d--z4c8c-eth0" Nov 1 00:54:20.587563 env[1305]: 2025-11-01 00:54:20.579 [INFO][3869] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:54:20.587563 env[1305]: 2025-11-01 00:54:20.585 [INFO][3856] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a71c47bf229b80276f24cc7651603be913775ecceb2cc3f939c907c10c7a4fd4" Nov 1 00:54:20.591127 systemd[1]: run-netns-cni\x2d95f485ad\x2dd59c\x2de1f1\x2d3db7\x2d24c24ba5a9f6.mount: Deactivated successfully. 
Nov 1 00:54:20.591602 env[1305]: time="2025-11-01T00:54:20.591562001Z" level=info msg="TearDown network for sandbox \"a71c47bf229b80276f24cc7651603be913775ecceb2cc3f939c907c10c7a4fd4\" successfully" Nov 1 00:54:20.591701 env[1305]: time="2025-11-01T00:54:20.591680945Z" level=info msg="StopPodSandbox for \"a71c47bf229b80276f24cc7651603be913775ecceb2cc3f939c907c10c7a4fd4\" returns successfully" Nov 1 00:54:20.593210 env[1305]: time="2025-11-01T00:54:20.593181251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85b568d67d-z4c8c,Uid:0979e255-e4e9-4664-a95e-5354a9f7d531,Namespace:calico-system,Attempt:1,}" Nov 1 00:54:20.630608 env[1305]: 2025-11-01 00:54:20.559 [INFO][3855] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="64b90828beb527fb7df55cab820b5b862fa0a86b6bce440a5f19677415b2d1b9" Nov 1 00:54:20.630608 env[1305]: 2025-11-01 00:54:20.559 [INFO][3855] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="64b90828beb527fb7df55cab820b5b862fa0a86b6bce440a5f19677415b2d1b9" iface="eth0" netns="/var/run/netns/cni-ad9aab57-f898-e8de-9041-246c08024ad9" Nov 1 00:54:20.630608 env[1305]: 2025-11-01 00:54:20.559 [INFO][3855] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="64b90828beb527fb7df55cab820b5b862fa0a86b6bce440a5f19677415b2d1b9" iface="eth0" netns="/var/run/netns/cni-ad9aab57-f898-e8de-9041-246c08024ad9" Nov 1 00:54:20.630608 env[1305]: 2025-11-01 00:54:20.560 [INFO][3855] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="64b90828beb527fb7df55cab820b5b862fa0a86b6bce440a5f19677415b2d1b9" iface="eth0" netns="/var/run/netns/cni-ad9aab57-f898-e8de-9041-246c08024ad9" Nov 1 00:54:20.630608 env[1305]: 2025-11-01 00:54:20.560 [INFO][3855] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="64b90828beb527fb7df55cab820b5b862fa0a86b6bce440a5f19677415b2d1b9" Nov 1 00:54:20.630608 env[1305]: 2025-11-01 00:54:20.560 [INFO][3855] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="64b90828beb527fb7df55cab820b5b862fa0a86b6bce440a5f19677415b2d1b9" Nov 1 00:54:20.630608 env[1305]: 2025-11-01 00:54:20.616 [INFO][3874] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="64b90828beb527fb7df55cab820b5b862fa0a86b6bce440a5f19677415b2d1b9" HandleID="k8s-pod-network.64b90828beb527fb7df55cab820b5b862fa0a86b6bce440a5f19677415b2d1b9" Workload="ci--3510.3.8--n--0efaf8214b-k8s-csi--node--driver--twt7m-eth0" Nov 1 00:54:20.630608 env[1305]: 2025-11-01 00:54:20.617 [INFO][3874] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:54:20.630608 env[1305]: 2025-11-01 00:54:20.617 [INFO][3874] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:54:20.630608 env[1305]: 2025-11-01 00:54:20.624 [WARNING][3874] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="64b90828beb527fb7df55cab820b5b862fa0a86b6bce440a5f19677415b2d1b9" HandleID="k8s-pod-network.64b90828beb527fb7df55cab820b5b862fa0a86b6bce440a5f19677415b2d1b9" Workload="ci--3510.3.8--n--0efaf8214b-k8s-csi--node--driver--twt7m-eth0" Nov 1 00:54:20.630608 env[1305]: 2025-11-01 00:54:20.624 [INFO][3874] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="64b90828beb527fb7df55cab820b5b862fa0a86b6bce440a5f19677415b2d1b9" HandleID="k8s-pod-network.64b90828beb527fb7df55cab820b5b862fa0a86b6bce440a5f19677415b2d1b9" Workload="ci--3510.3.8--n--0efaf8214b-k8s-csi--node--driver--twt7m-eth0" Nov 1 00:54:20.630608 env[1305]: 2025-11-01 00:54:20.626 [INFO][3874] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:54:20.630608 env[1305]: 2025-11-01 00:54:20.628 [INFO][3855] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="64b90828beb527fb7df55cab820b5b862fa0a86b6bce440a5f19677415b2d1b9" Nov 1 00:54:20.633847 systemd[1]: run-netns-cni\x2dad9aab57\x2df898\x2de8de\x2d9041\x2d246c08024ad9.mount: Deactivated successfully. 
Nov 1 00:54:20.635387 env[1305]: time="2025-11-01T00:54:20.635335508Z" level=info msg="TearDown network for sandbox \"64b90828beb527fb7df55cab820b5b862fa0a86b6bce440a5f19677415b2d1b9\" successfully" Nov 1 00:54:20.635387 env[1305]: time="2025-11-01T00:54:20.635386159Z" level=info msg="StopPodSandbox for \"64b90828beb527fb7df55cab820b5b862fa0a86b6bce440a5f19677415b2d1b9\" returns successfully" Nov 1 00:54:20.636313 env[1305]: time="2025-11-01T00:54:20.636281819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-twt7m,Uid:b08705e4-7a04-4c33-a8c8-a3f67298574d,Namespace:calico-system,Attempt:1,}" Nov 1 00:54:20.735281 kubelet[2095]: E1101 00:54:20.735171 2095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:54:20.737269 kubelet[2095]: E1101 00:54:20.736745 2095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:54:20.749658 kubelet[2095]: E1101 00:54:20.739657 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6dfb57dc84-knf65" podUID="0234b74a-300a-4772-b752-16560b6b9a9c" Nov 1 00:54:20.770792 kubelet[2095]: I1101 00:54:20.762622 2095 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-hbw54" podStartSLOduration=38.76249527 podStartE2EDuration="38.76249527s" podCreationTimestamp="2025-11-01 00:53:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:54:20.751070874 +0000 UTC m=+44.509166468" watchObservedRunningTime="2025-11-01 00:54:20.76249527 +0000 UTC m=+44.520590874" Nov 1 00:54:20.813000 audit[3922]: NETFILTER_CFG table=filter:114 family=2 entries=14 op=nft_register_rule pid=3922 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:54:20.813000 audit[3922]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc47a25e10 a2=0 a3=7ffc47a25dfc items=0 ppid=2198 pid=3922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:20.813000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:54:20.818448 systemd-networkd[1058]: cali243fb163716: Link UP Nov 1 00:54:20.824065 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Nov 1 00:54:20.824145 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali243fb163716: link becomes ready Nov 1 00:54:20.824000 audit[3922]: NETFILTER_CFG table=nat:115 family=2 entries=44 op=nft_register_rule pid=3922 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:54:20.824000 audit[3922]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffc47a25e10 a2=0 
a3=7ffc47a25dfc items=0 ppid=2198 pid=3922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:20.824000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:54:20.824389 systemd-networkd[1058]: cali243fb163716: Gained carrier Nov 1 00:54:20.849643 env[1305]: 2025-11-01 00:54:20.682 [INFO][3881] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--0efaf8214b-k8s-calico--kube--controllers--85b568d67d--z4c8c-eth0 calico-kube-controllers-85b568d67d- calico-system 0979e255-e4e9-4664-a95e-5354a9f7d531 963 0 2025-11-01 00:53:58 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:85b568d67d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3510.3.8-n-0efaf8214b calico-kube-controllers-85b568d67d-z4c8c eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali243fb163716 [] [] }} ContainerID="9ff4f2351553a739177c828475227263cf3a64a6253900ba16d947869bb2f11c" Namespace="calico-system" Pod="calico-kube-controllers-85b568d67d-z4c8c" WorkloadEndpoint="ci--3510.3.8--n--0efaf8214b-k8s-calico--kube--controllers--85b568d67d--z4c8c-" Nov 1 00:54:20.849643 env[1305]: 2025-11-01 00:54:20.683 [INFO][3881] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9ff4f2351553a739177c828475227263cf3a64a6253900ba16d947869bb2f11c" Namespace="calico-system" Pod="calico-kube-controllers-85b568d67d-z4c8c" WorkloadEndpoint="ci--3510.3.8--n--0efaf8214b-k8s-calico--kube--controllers--85b568d67d--z4c8c-eth0" Nov 1 00:54:20.849643 env[1305]: 2025-11-01 00:54:20.723 
[INFO][3905] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9ff4f2351553a739177c828475227263cf3a64a6253900ba16d947869bb2f11c" HandleID="k8s-pod-network.9ff4f2351553a739177c828475227263cf3a64a6253900ba16d947869bb2f11c" Workload="ci--3510.3.8--n--0efaf8214b-k8s-calico--kube--controllers--85b568d67d--z4c8c-eth0" Nov 1 00:54:20.849643 env[1305]: 2025-11-01 00:54:20.723 [INFO][3905] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9ff4f2351553a739177c828475227263cf3a64a6253900ba16d947869bb2f11c" HandleID="k8s-pod-network.9ff4f2351553a739177c828475227263cf3a64a6253900ba16d947869bb2f11c" Workload="ci--3510.3.8--n--0efaf8214b-k8s-calico--kube--controllers--85b568d67d--z4c8c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd5e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.8-n-0efaf8214b", "pod":"calico-kube-controllers-85b568d67d-z4c8c", "timestamp":"2025-11-01 00:54:20.723182866 +0000 UTC"}, Hostname:"ci-3510.3.8-n-0efaf8214b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:54:20.849643 env[1305]: 2025-11-01 00:54:20.723 [INFO][3905] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:54:20.849643 env[1305]: 2025-11-01 00:54:20.723 [INFO][3905] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:54:20.849643 env[1305]: 2025-11-01 00:54:20.723 [INFO][3905] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-0efaf8214b' Nov 1 00:54:20.849643 env[1305]: 2025-11-01 00:54:20.732 [INFO][3905] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9ff4f2351553a739177c828475227263cf3a64a6253900ba16d947869bb2f11c" host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:20.849643 env[1305]: 2025-11-01 00:54:20.746 [INFO][3905] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:20.849643 env[1305]: 2025-11-01 00:54:20.761 [INFO][3905] ipam/ipam.go 511: Trying affinity for 192.168.55.128/26 host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:20.849643 env[1305]: 2025-11-01 00:54:20.776 [INFO][3905] ipam/ipam.go 158: Attempting to load block cidr=192.168.55.128/26 host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:20.849643 env[1305]: 2025-11-01 00:54:20.786 [INFO][3905] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.55.128/26 host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:20.849643 env[1305]: 2025-11-01 00:54:20.786 [INFO][3905] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.55.128/26 handle="k8s-pod-network.9ff4f2351553a739177c828475227263cf3a64a6253900ba16d947869bb2f11c" host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:20.849643 env[1305]: 2025-11-01 00:54:20.791 [INFO][3905] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9ff4f2351553a739177c828475227263cf3a64a6253900ba16d947869bb2f11c Nov 1 00:54:20.849643 env[1305]: 2025-11-01 00:54:20.799 [INFO][3905] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.55.128/26 handle="k8s-pod-network.9ff4f2351553a739177c828475227263cf3a64a6253900ba16d947869bb2f11c" host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:20.849643 env[1305]: 2025-11-01 00:54:20.810 [INFO][3905] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.55.132/26] block=192.168.55.128/26 
handle="k8s-pod-network.9ff4f2351553a739177c828475227263cf3a64a6253900ba16d947869bb2f11c" host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:20.849643 env[1305]: 2025-11-01 00:54:20.811 [INFO][3905] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.55.132/26] handle="k8s-pod-network.9ff4f2351553a739177c828475227263cf3a64a6253900ba16d947869bb2f11c" host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:20.849643 env[1305]: 2025-11-01 00:54:20.811 [INFO][3905] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:54:20.849643 env[1305]: 2025-11-01 00:54:20.811 [INFO][3905] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.55.132/26] IPv6=[] ContainerID="9ff4f2351553a739177c828475227263cf3a64a6253900ba16d947869bb2f11c" HandleID="k8s-pod-network.9ff4f2351553a739177c828475227263cf3a64a6253900ba16d947869bb2f11c" Workload="ci--3510.3.8--n--0efaf8214b-k8s-calico--kube--controllers--85b568d67d--z4c8c-eth0" Nov 1 00:54:20.850820 env[1305]: 2025-11-01 00:54:20.813 [INFO][3881] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9ff4f2351553a739177c828475227263cf3a64a6253900ba16d947869bb2f11c" Namespace="calico-system" Pod="calico-kube-controllers-85b568d67d-z4c8c" WorkloadEndpoint="ci--3510.3.8--n--0efaf8214b-k8s-calico--kube--controllers--85b568d67d--z4c8c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0efaf8214b-k8s-calico--kube--controllers--85b568d67d--z4c8c-eth0", GenerateName:"calico-kube-controllers-85b568d67d-", Namespace:"calico-system", SelfLink:"", UID:"0979e255-e4e9-4664-a95e-5354a9f7d531", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 53, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85b568d67d", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0efaf8214b", ContainerID:"", Pod:"calico-kube-controllers-85b568d67d-z4c8c", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.55.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali243fb163716", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:54:20.850820 env[1305]: 2025-11-01 00:54:20.813 [INFO][3881] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.55.132/32] ContainerID="9ff4f2351553a739177c828475227263cf3a64a6253900ba16d947869bb2f11c" Namespace="calico-system" Pod="calico-kube-controllers-85b568d67d-z4c8c" WorkloadEndpoint="ci--3510.3.8--n--0efaf8214b-k8s-calico--kube--controllers--85b568d67d--z4c8c-eth0" Nov 1 00:54:20.850820 env[1305]: 2025-11-01 00:54:20.813 [INFO][3881] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali243fb163716 ContainerID="9ff4f2351553a739177c828475227263cf3a64a6253900ba16d947869bb2f11c" Namespace="calico-system" Pod="calico-kube-controllers-85b568d67d-z4c8c" WorkloadEndpoint="ci--3510.3.8--n--0efaf8214b-k8s-calico--kube--controllers--85b568d67d--z4c8c-eth0" Nov 1 00:54:20.850820 env[1305]: 2025-11-01 00:54:20.825 [INFO][3881] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9ff4f2351553a739177c828475227263cf3a64a6253900ba16d947869bb2f11c" Namespace="calico-system" Pod="calico-kube-controllers-85b568d67d-z4c8c" 
WorkloadEndpoint="ci--3510.3.8--n--0efaf8214b-k8s-calico--kube--controllers--85b568d67d--z4c8c-eth0" Nov 1 00:54:20.850820 env[1305]: 2025-11-01 00:54:20.825 [INFO][3881] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9ff4f2351553a739177c828475227263cf3a64a6253900ba16d947869bb2f11c" Namespace="calico-system" Pod="calico-kube-controllers-85b568d67d-z4c8c" WorkloadEndpoint="ci--3510.3.8--n--0efaf8214b-k8s-calico--kube--controllers--85b568d67d--z4c8c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0efaf8214b-k8s-calico--kube--controllers--85b568d67d--z4c8c-eth0", GenerateName:"calico-kube-controllers-85b568d67d-", Namespace:"calico-system", SelfLink:"", UID:"0979e255-e4e9-4664-a95e-5354a9f7d531", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 53, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85b568d67d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0efaf8214b", ContainerID:"9ff4f2351553a739177c828475227263cf3a64a6253900ba16d947869bb2f11c", Pod:"calico-kube-controllers-85b568d67d-z4c8c", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.55.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali243fb163716", 
MAC:"56:08:00:d6:5c:b1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:54:20.850820 env[1305]: 2025-11-01 00:54:20.841 [INFO][3881] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9ff4f2351553a739177c828475227263cf3a64a6253900ba16d947869bb2f11c" Namespace="calico-system" Pod="calico-kube-controllers-85b568d67d-z4c8c" WorkloadEndpoint="ci--3510.3.8--n--0efaf8214b-k8s-calico--kube--controllers--85b568d67d--z4c8c-eth0" Nov 1 00:54:20.868000 audit[3931]: NETFILTER_CFG table=filter:116 family=2 entries=44 op=nft_register_chain pid=3931 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:54:20.868000 audit[3931]: SYSCALL arch=c000003e syscall=46 success=yes exit=21952 a0=3 a1=7ffc71e82b50 a2=0 a3=7ffc71e82b3c items=0 ppid=3335 pid=3931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:20.868000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:54:20.889929 env[1305]: time="2025-11-01T00:54:20.883692310Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:54:20.889929 env[1305]: time="2025-11-01T00:54:20.883742471Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:54:20.889929 env[1305]: time="2025-11-01T00:54:20.883770981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:54:20.889929 env[1305]: time="2025-11-01T00:54:20.883899021Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9ff4f2351553a739177c828475227263cf3a64a6253900ba16d947869bb2f11c pid=3939 runtime=io.containerd.runc.v2 Nov 1 00:54:20.891127 systemd-networkd[1058]: cali56a43b874bd: Link UP Nov 1 00:54:20.895863 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali56a43b874bd: link becomes ready Nov 1 00:54:20.895890 systemd-networkd[1058]: cali56a43b874bd: Gained carrier Nov 1 00:54:20.919347 env[1305]: 2025-11-01 00:54:20.718 [INFO][3893] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--0efaf8214b-k8s-csi--node--driver--twt7m-eth0 csi-node-driver- calico-system b08705e4-7a04-4c33-a8c8-a3f67298574d 964 0 2025-11-01 00:53:58 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-3510.3.8-n-0efaf8214b csi-node-driver-twt7m eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali56a43b874bd [] [] }} ContainerID="2b76c79b62ff7a865347dc2f0ee6e71abb1485386213834c53c46352e3656012" Namespace="calico-system" Pod="csi-node-driver-twt7m" WorkloadEndpoint="ci--3510.3.8--n--0efaf8214b-k8s-csi--node--driver--twt7m-" Nov 1 00:54:20.919347 env[1305]: 2025-11-01 00:54:20.719 [INFO][3893] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2b76c79b62ff7a865347dc2f0ee6e71abb1485386213834c53c46352e3656012" Namespace="calico-system" Pod="csi-node-driver-twt7m" WorkloadEndpoint="ci--3510.3.8--n--0efaf8214b-k8s-csi--node--driver--twt7m-eth0" Nov 1 00:54:20.919347 env[1305]: 2025-11-01 00:54:20.807 
[INFO][3914] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2b76c79b62ff7a865347dc2f0ee6e71abb1485386213834c53c46352e3656012" HandleID="k8s-pod-network.2b76c79b62ff7a865347dc2f0ee6e71abb1485386213834c53c46352e3656012" Workload="ci--3510.3.8--n--0efaf8214b-k8s-csi--node--driver--twt7m-eth0" Nov 1 00:54:20.919347 env[1305]: 2025-11-01 00:54:20.831 [INFO][3914] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2b76c79b62ff7a865347dc2f0ee6e71abb1485386213834c53c46352e3656012" HandleID="k8s-pod-network.2b76c79b62ff7a865347dc2f0ee6e71abb1485386213834c53c46352e3656012" Workload="ci--3510.3.8--n--0efaf8214b-k8s-csi--node--driver--twt7m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003357d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.8-n-0efaf8214b", "pod":"csi-node-driver-twt7m", "timestamp":"2025-11-01 00:54:20.807976268 +0000 UTC"}, Hostname:"ci-3510.3.8-n-0efaf8214b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:54:20.919347 env[1305]: 2025-11-01 00:54:20.831 [INFO][3914] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:54:20.919347 env[1305]: 2025-11-01 00:54:20.831 [INFO][3914] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:54:20.919347 env[1305]: 2025-11-01 00:54:20.831 [INFO][3914] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-0efaf8214b' Nov 1 00:54:20.919347 env[1305]: 2025-11-01 00:54:20.850 [INFO][3914] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2b76c79b62ff7a865347dc2f0ee6e71abb1485386213834c53c46352e3656012" host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:20.919347 env[1305]: 2025-11-01 00:54:20.855 [INFO][3914] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:20.919347 env[1305]: 2025-11-01 00:54:20.860 [INFO][3914] ipam/ipam.go 511: Trying affinity for 192.168.55.128/26 host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:20.919347 env[1305]: 2025-11-01 00:54:20.862 [INFO][3914] ipam/ipam.go 158: Attempting to load block cidr=192.168.55.128/26 host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:20.919347 env[1305]: 2025-11-01 00:54:20.865 [INFO][3914] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.55.128/26 host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:20.919347 env[1305]: 2025-11-01 00:54:20.865 [INFO][3914] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.55.128/26 handle="k8s-pod-network.2b76c79b62ff7a865347dc2f0ee6e71abb1485386213834c53c46352e3656012" host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:20.919347 env[1305]: 2025-11-01 00:54:20.866 [INFO][3914] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2b76c79b62ff7a865347dc2f0ee6e71abb1485386213834c53c46352e3656012 Nov 1 00:54:20.919347 env[1305]: 2025-11-01 00:54:20.871 [INFO][3914] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.55.128/26 handle="k8s-pod-network.2b76c79b62ff7a865347dc2f0ee6e71abb1485386213834c53c46352e3656012" host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:20.919347 env[1305]: 2025-11-01 00:54:20.877 [INFO][3914] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.55.133/26] block=192.168.55.128/26 
handle="k8s-pod-network.2b76c79b62ff7a865347dc2f0ee6e71abb1485386213834c53c46352e3656012" host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:20.919347 env[1305]: 2025-11-01 00:54:20.877 [INFO][3914] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.55.133/26] handle="k8s-pod-network.2b76c79b62ff7a865347dc2f0ee6e71abb1485386213834c53c46352e3656012" host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:20.919347 env[1305]: 2025-11-01 00:54:20.877 [INFO][3914] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:54:20.919347 env[1305]: 2025-11-01 00:54:20.877 [INFO][3914] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.55.133/26] IPv6=[] ContainerID="2b76c79b62ff7a865347dc2f0ee6e71abb1485386213834c53c46352e3656012" HandleID="k8s-pod-network.2b76c79b62ff7a865347dc2f0ee6e71abb1485386213834c53c46352e3656012" Workload="ci--3510.3.8--n--0efaf8214b-k8s-csi--node--driver--twt7m-eth0" Nov 1 00:54:20.920900 env[1305]: 2025-11-01 00:54:20.880 [INFO][3893] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2b76c79b62ff7a865347dc2f0ee6e71abb1485386213834c53c46352e3656012" Namespace="calico-system" Pod="csi-node-driver-twt7m" WorkloadEndpoint="ci--3510.3.8--n--0efaf8214b-k8s-csi--node--driver--twt7m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0efaf8214b-k8s-csi--node--driver--twt7m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b08705e4-7a04-4c33-a8c8-a3f67298574d", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 53, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0efaf8214b", ContainerID:"", Pod:"csi-node-driver-twt7m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.55.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali56a43b874bd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:54:20.920900 env[1305]: 2025-11-01 00:54:20.880 [INFO][3893] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.55.133/32] ContainerID="2b76c79b62ff7a865347dc2f0ee6e71abb1485386213834c53c46352e3656012" Namespace="calico-system" Pod="csi-node-driver-twt7m" WorkloadEndpoint="ci--3510.3.8--n--0efaf8214b-k8s-csi--node--driver--twt7m-eth0" Nov 1 00:54:20.920900 env[1305]: 2025-11-01 00:54:20.880 [INFO][3893] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali56a43b874bd ContainerID="2b76c79b62ff7a865347dc2f0ee6e71abb1485386213834c53c46352e3656012" Namespace="calico-system" Pod="csi-node-driver-twt7m" WorkloadEndpoint="ci--3510.3.8--n--0efaf8214b-k8s-csi--node--driver--twt7m-eth0" Nov 1 00:54:20.920900 env[1305]: 2025-11-01 00:54:20.896 [INFO][3893] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2b76c79b62ff7a865347dc2f0ee6e71abb1485386213834c53c46352e3656012" Namespace="calico-system" Pod="csi-node-driver-twt7m" WorkloadEndpoint="ci--3510.3.8--n--0efaf8214b-k8s-csi--node--driver--twt7m-eth0" Nov 1 00:54:20.920900 env[1305]: 2025-11-01 00:54:20.897 [INFO][3893] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="2b76c79b62ff7a865347dc2f0ee6e71abb1485386213834c53c46352e3656012" Namespace="calico-system" Pod="csi-node-driver-twt7m" WorkloadEndpoint="ci--3510.3.8--n--0efaf8214b-k8s-csi--node--driver--twt7m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0efaf8214b-k8s-csi--node--driver--twt7m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b08705e4-7a04-4c33-a8c8-a3f67298574d", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 53, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0efaf8214b", ContainerID:"2b76c79b62ff7a865347dc2f0ee6e71abb1485386213834c53c46352e3656012", Pod:"csi-node-driver-twt7m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.55.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali56a43b874bd", MAC:"aa:91:a9:2e:f9:86", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:54:20.920900 env[1305]: 2025-11-01 00:54:20.914 [INFO][3893] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="2b76c79b62ff7a865347dc2f0ee6e71abb1485386213834c53c46352e3656012" Namespace="calico-system" Pod="csi-node-driver-twt7m" WorkloadEndpoint="ci--3510.3.8--n--0efaf8214b-k8s-csi--node--driver--twt7m-eth0" Nov 1 00:54:20.935689 systemd-networkd[1058]: califf827e12b10: Gained IPv6LL Nov 1 00:54:20.958803 env[1305]: time="2025-11-01T00:54:20.947651148Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:54:20.958803 env[1305]: time="2025-11-01T00:54:20.947724125Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:54:20.958803 env[1305]: time="2025-11-01T00:54:20.947746377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:54:20.958803 env[1305]: time="2025-11-01T00:54:20.947911404Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2b76c79b62ff7a865347dc2f0ee6e71abb1485386213834c53c46352e3656012 pid=3975 runtime=io.containerd.runc.v2 Nov 1 00:54:20.936031 systemd-networkd[1058]: vxlan.calico: Gained IPv6LL Nov 1 00:54:20.965000 audit[3989]: NETFILTER_CFG table=filter:117 family=2 entries=48 op=nft_register_chain pid=3989 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:54:20.965000 audit[3989]: SYSCALL arch=c000003e syscall=46 success=yes exit=23140 a0=3 a1=7ffc4e823580 a2=0 a3=7ffc4e82356c items=0 ppid=3335 pid=3989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:20.965000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 
00:54:21.015170 env[1305]: time="2025-11-01T00:54:21.015090276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-twt7m,Uid:b08705e4-7a04-4c33-a8c8-a3f67298574d,Namespace:calico-system,Attempt:1,} returns sandbox id \"2b76c79b62ff7a865347dc2f0ee6e71abb1485386213834c53c46352e3656012\"" Nov 1 00:54:21.019322 env[1305]: time="2025-11-01T00:54:21.019282323Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:54:21.049522 env[1305]: time="2025-11-01T00:54:21.049463729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85b568d67d-z4c8c,Uid:0979e255-e4e9-4664-a95e-5354a9f7d531,Namespace:calico-system,Attempt:1,} returns sandbox id \"9ff4f2351553a739177c828475227263cf3a64a6253900ba16d947869bb2f11c\"" Nov 1 00:54:21.184950 systemd-networkd[1058]: calibad39d8f5cf: Gained IPv6LL Nov 1 00:54:21.318887 env[1305]: time="2025-11-01T00:54:21.318537687Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:54:21.319490 env[1305]: time="2025-11-01T00:54:21.319397876Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:54:21.319739 kubelet[2095]: E1101 00:54:21.319691 2095 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:54:21.320076 kubelet[2095]: E1101 00:54:21.319746 2095 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:54:21.320076 kubelet[2095]: E1101 00:54:21.320012 2095 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qgc7f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices
:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-twt7m_calico-system(b08705e4-7a04-4c33-a8c8-a3f67298574d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:54:21.321134 env[1305]: time="2025-11-01T00:54:21.321094345Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:54:21.467482 env[1305]: time="2025-11-01T00:54:21.467091704Z" level=info msg="StopPodSandbox for \"ccff4abf279d481fe758cf98b6a09e4c82985082166ebb004231c04ed8110c8b\"" Nov 1 00:54:21.468397 env[1305]: time="2025-11-01T00:54:21.468361395Z" level=info msg="StopPodSandbox for \"bdf1219825ffb0504da66c39f49f8fbd55336a1f4e428b159b218ea10f5222a6\"" Nov 1 00:54:21.603721 env[1305]: 2025-11-01 00:54:21.539 [INFO][4047] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bdf1219825ffb0504da66c39f49f8fbd55336a1f4e428b159b218ea10f5222a6" Nov 1 00:54:21.603721 env[1305]: 2025-11-01 00:54:21.542 [INFO][4047] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bdf1219825ffb0504da66c39f49f8fbd55336a1f4e428b159b218ea10f5222a6" iface="eth0" netns="/var/run/netns/cni-fb847030-aa8f-17dd-8d30-f5b5b8bca75b" Nov 1 00:54:21.603721 env[1305]: 2025-11-01 00:54:21.542 [INFO][4047] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bdf1219825ffb0504da66c39f49f8fbd55336a1f4e428b159b218ea10f5222a6" iface="eth0" netns="/var/run/netns/cni-fb847030-aa8f-17dd-8d30-f5b5b8bca75b" Nov 1 00:54:21.603721 env[1305]: 2025-11-01 00:54:21.543 [INFO][4047] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="bdf1219825ffb0504da66c39f49f8fbd55336a1f4e428b159b218ea10f5222a6" iface="eth0" netns="/var/run/netns/cni-fb847030-aa8f-17dd-8d30-f5b5b8bca75b" Nov 1 00:54:21.603721 env[1305]: 2025-11-01 00:54:21.543 [INFO][4047] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bdf1219825ffb0504da66c39f49f8fbd55336a1f4e428b159b218ea10f5222a6" Nov 1 00:54:21.603721 env[1305]: 2025-11-01 00:54:21.543 [INFO][4047] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bdf1219825ffb0504da66c39f49f8fbd55336a1f4e428b159b218ea10f5222a6" Nov 1 00:54:21.603721 env[1305]: 2025-11-01 00:54:21.587 [INFO][4061] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="bdf1219825ffb0504da66c39f49f8fbd55336a1f4e428b159b218ea10f5222a6" HandleID="k8s-pod-network.bdf1219825ffb0504da66c39f49f8fbd55336a1f4e428b159b218ea10f5222a6" Workload="ci--3510.3.8--n--0efaf8214b-k8s-goldmane--666569f655--j9dnh-eth0" Nov 1 00:54:21.603721 env[1305]: 2025-11-01 00:54:21.587 [INFO][4061] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:54:21.603721 env[1305]: 2025-11-01 00:54:21.587 [INFO][4061] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:54:21.603721 env[1305]: 2025-11-01 00:54:21.597 [WARNING][4061] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bdf1219825ffb0504da66c39f49f8fbd55336a1f4e428b159b218ea10f5222a6" HandleID="k8s-pod-network.bdf1219825ffb0504da66c39f49f8fbd55336a1f4e428b159b218ea10f5222a6" Workload="ci--3510.3.8--n--0efaf8214b-k8s-goldmane--666569f655--j9dnh-eth0" Nov 1 00:54:21.603721 env[1305]: 2025-11-01 00:54:21.597 [INFO][4061] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="bdf1219825ffb0504da66c39f49f8fbd55336a1f4e428b159b218ea10f5222a6" HandleID="k8s-pod-network.bdf1219825ffb0504da66c39f49f8fbd55336a1f4e428b159b218ea10f5222a6" Workload="ci--3510.3.8--n--0efaf8214b-k8s-goldmane--666569f655--j9dnh-eth0" Nov 1 00:54:21.603721 env[1305]: 2025-11-01 00:54:21.599 [INFO][4061] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:54:21.603721 env[1305]: 2025-11-01 00:54:21.601 [INFO][4047] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bdf1219825ffb0504da66c39f49f8fbd55336a1f4e428b159b218ea10f5222a6" Nov 1 00:54:21.613610 env[1305]: time="2025-11-01T00:54:21.609105148Z" level=info msg="TearDown network for sandbox \"bdf1219825ffb0504da66c39f49f8fbd55336a1f4e428b159b218ea10f5222a6\" successfully" Nov 1 00:54:21.613610 env[1305]: time="2025-11-01T00:54:21.609151370Z" level=info msg="StopPodSandbox for \"bdf1219825ffb0504da66c39f49f8fbd55336a1f4e428b159b218ea10f5222a6\" returns successfully" Nov 1 00:54:21.612151 systemd[1]: run-netns-cni\x2dfb847030\x2daa8f\x2d17dd\x2d8d30\x2df5b5b8bca75b.mount: Deactivated successfully. 
Nov 1 00:54:21.615196 env[1305]: time="2025-11-01T00:54:21.615143332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-j9dnh,Uid:acf47117-3eb1-4aa3-89a4-bc9fecdad703,Namespace:calico-system,Attempt:1,}" Nov 1 00:54:21.622505 env[1305]: 2025-11-01 00:54:21.542 [INFO][4046] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ccff4abf279d481fe758cf98b6a09e4c82985082166ebb004231c04ed8110c8b" Nov 1 00:54:21.622505 env[1305]: 2025-11-01 00:54:21.543 [INFO][4046] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ccff4abf279d481fe758cf98b6a09e4c82985082166ebb004231c04ed8110c8b" iface="eth0" netns="/var/run/netns/cni-bae2b04f-32e9-b6ff-ec1a-d1c7e9992c55" Nov 1 00:54:21.622505 env[1305]: 2025-11-01 00:54:21.543 [INFO][4046] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ccff4abf279d481fe758cf98b6a09e4c82985082166ebb004231c04ed8110c8b" iface="eth0" netns="/var/run/netns/cni-bae2b04f-32e9-b6ff-ec1a-d1c7e9992c55" Nov 1 00:54:21.622505 env[1305]: 2025-11-01 00:54:21.543 [INFO][4046] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ccff4abf279d481fe758cf98b6a09e4c82985082166ebb004231c04ed8110c8b" iface="eth0" netns="/var/run/netns/cni-bae2b04f-32e9-b6ff-ec1a-d1c7e9992c55" Nov 1 00:54:21.622505 env[1305]: 2025-11-01 00:54:21.543 [INFO][4046] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ccff4abf279d481fe758cf98b6a09e4c82985082166ebb004231c04ed8110c8b" Nov 1 00:54:21.622505 env[1305]: 2025-11-01 00:54:21.543 [INFO][4046] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ccff4abf279d481fe758cf98b6a09e4c82985082166ebb004231c04ed8110c8b" Nov 1 00:54:21.622505 env[1305]: 2025-11-01 00:54:21.602 [INFO][4060] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ccff4abf279d481fe758cf98b6a09e4c82985082166ebb004231c04ed8110c8b" HandleID="k8s-pod-network.ccff4abf279d481fe758cf98b6a09e4c82985082166ebb004231c04ed8110c8b" Workload="ci--3510.3.8--n--0efaf8214b-k8s-calico--apiserver--5f668d4ccf--gzvhz-eth0" Nov 1 00:54:21.622505 env[1305]: 2025-11-01 00:54:21.603 [INFO][4060] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:54:21.622505 env[1305]: 2025-11-01 00:54:21.603 [INFO][4060] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:54:21.622505 env[1305]: 2025-11-01 00:54:21.613 [WARNING][4060] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ccff4abf279d481fe758cf98b6a09e4c82985082166ebb004231c04ed8110c8b" HandleID="k8s-pod-network.ccff4abf279d481fe758cf98b6a09e4c82985082166ebb004231c04ed8110c8b" Workload="ci--3510.3.8--n--0efaf8214b-k8s-calico--apiserver--5f668d4ccf--gzvhz-eth0" Nov 1 00:54:21.622505 env[1305]: 2025-11-01 00:54:21.614 [INFO][4060] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ccff4abf279d481fe758cf98b6a09e4c82985082166ebb004231c04ed8110c8b" HandleID="k8s-pod-network.ccff4abf279d481fe758cf98b6a09e4c82985082166ebb004231c04ed8110c8b" Workload="ci--3510.3.8--n--0efaf8214b-k8s-calico--apiserver--5f668d4ccf--gzvhz-eth0" Nov 1 00:54:21.622505 env[1305]: 2025-11-01 00:54:21.616 [INFO][4060] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:54:21.622505 env[1305]: 2025-11-01 00:54:21.618 [INFO][4046] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ccff4abf279d481fe758cf98b6a09e4c82985082166ebb004231c04ed8110c8b" Nov 1 00:54:21.627439 env[1305]: time="2025-11-01T00:54:21.625693327Z" level=info msg="TearDown network for sandbox \"ccff4abf279d481fe758cf98b6a09e4c82985082166ebb004231c04ed8110c8b\" successfully" Nov 1 00:54:21.627439 env[1305]: time="2025-11-01T00:54:21.625742885Z" level=info msg="StopPodSandbox for \"ccff4abf279d481fe758cf98b6a09e4c82985082166ebb004231c04ed8110c8b\" returns successfully" Nov 1 00:54:21.626264 systemd[1]: run-netns-cni\x2dbae2b04f\x2d32e9\x2db6ff\x2dec1a\x2dd1c7e9992c55.mount: Deactivated successfully. 
Nov 1 00:54:21.628432 env[1305]: time="2025-11-01T00:54:21.628349278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f668d4ccf-gzvhz,Uid:0aeb6ff7-2d7d-423c-8068-1607bda1ebe8,Namespace:calico-apiserver,Attempt:1,}" Nov 1 00:54:21.748889 kubelet[2095]: E1101 00:54:21.748474 2095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:54:21.765680 env[1305]: time="2025-11-01T00:54:21.765616242Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:54:21.768010 env[1305]: time="2025-11-01T00:54:21.767950309Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:54:21.769581 kubelet[2095]: E1101 00:54:21.769500 2095 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:54:21.769668 kubelet[2095]: E1101 00:54:21.769609 2095 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:54:21.769994 kubelet[2095]: E1101 00:54:21.769914 
2095 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bmmkj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-85b568d67d-z4c8c_calico-system(0979e255-e4e9-4664-a95e-5354a9f7d531): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:54:21.771825 kubelet[2095]: E1101 00:54:21.771788 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85b568d67d-z4c8c" podUID="0979e255-e4e9-4664-a95e-5354a9f7d531" Nov 1 00:54:21.772223 env[1305]: time="2025-11-01T00:54:21.772196973Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:54:21.802147 
systemd-networkd[1058]: calicf66231088a: Link UP Nov 1 00:54:21.805398 systemd-networkd[1058]: calicf66231088a: Gained carrier Nov 1 00:54:21.805830 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calicf66231088a: link becomes ready Nov 1 00:54:21.824992 env[1305]: 2025-11-01 00:54:21.698 [INFO][4073] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--0efaf8214b-k8s-goldmane--666569f655--j9dnh-eth0 goldmane-666569f655- calico-system acf47117-3eb1-4aa3-89a4-bc9fecdad703 995 0 2025-11-01 00:53:55 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-3510.3.8-n-0efaf8214b goldmane-666569f655-j9dnh eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calicf66231088a [] [] }} ContainerID="fb2160212222fae5eecf19d10e155447b4a730720a9df910708389838a5ec59f" Namespace="calico-system" Pod="goldmane-666569f655-j9dnh" WorkloadEndpoint="ci--3510.3.8--n--0efaf8214b-k8s-goldmane--666569f655--j9dnh-" Nov 1 00:54:21.824992 env[1305]: 2025-11-01 00:54:21.698 [INFO][4073] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fb2160212222fae5eecf19d10e155447b4a730720a9df910708389838a5ec59f" Namespace="calico-system" Pod="goldmane-666569f655-j9dnh" WorkloadEndpoint="ci--3510.3.8--n--0efaf8214b-k8s-goldmane--666569f655--j9dnh-eth0" Nov 1 00:54:21.824992 env[1305]: 2025-11-01 00:54:21.738 [INFO][4100] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fb2160212222fae5eecf19d10e155447b4a730720a9df910708389838a5ec59f" HandleID="k8s-pod-network.fb2160212222fae5eecf19d10e155447b4a730720a9df910708389838a5ec59f" Workload="ci--3510.3.8--n--0efaf8214b-k8s-goldmane--666569f655--j9dnh-eth0" Nov 1 00:54:21.824992 env[1305]: 2025-11-01 00:54:21.738 [INFO][4100] ipam/ipam_plugin.go 275: Auto assigning 
IP ContainerID="fb2160212222fae5eecf19d10e155447b4a730720a9df910708389838a5ec59f" HandleID="k8s-pod-network.fb2160212222fae5eecf19d10e155447b4a730720a9df910708389838a5ec59f" Workload="ci--3510.3.8--n--0efaf8214b-k8s-goldmane--666569f655--j9dnh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ccfe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.8-n-0efaf8214b", "pod":"goldmane-666569f655-j9dnh", "timestamp":"2025-11-01 00:54:21.738413046 +0000 UTC"}, Hostname:"ci-3510.3.8-n-0efaf8214b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:54:21.824992 env[1305]: 2025-11-01 00:54:21.738 [INFO][4100] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:54:21.824992 env[1305]: 2025-11-01 00:54:21.738 [INFO][4100] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:54:21.824992 env[1305]: 2025-11-01 00:54:21.739 [INFO][4100] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-0efaf8214b' Nov 1 00:54:21.824992 env[1305]: 2025-11-01 00:54:21.749 [INFO][4100] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fb2160212222fae5eecf19d10e155447b4a730720a9df910708389838a5ec59f" host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:21.824992 env[1305]: 2025-11-01 00:54:21.757 [INFO][4100] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:21.824992 env[1305]: 2025-11-01 00:54:21.764 [INFO][4100] ipam/ipam.go 511: Trying affinity for 192.168.55.128/26 host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:21.824992 env[1305]: 2025-11-01 00:54:21.766 [INFO][4100] ipam/ipam.go 158: Attempting to load block cidr=192.168.55.128/26 host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:21.824992 env[1305]: 2025-11-01 00:54:21.770 [INFO][4100] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.55.128/26 host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:21.824992 env[1305]: 2025-11-01 00:54:21.770 [INFO][4100] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.55.128/26 handle="k8s-pod-network.fb2160212222fae5eecf19d10e155447b4a730720a9df910708389838a5ec59f" host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:21.824992 env[1305]: 2025-11-01 00:54:21.780 [INFO][4100] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fb2160212222fae5eecf19d10e155447b4a730720a9df910708389838a5ec59f Nov 1 00:54:21.824992 env[1305]: 2025-11-01 00:54:21.784 [INFO][4100] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.55.128/26 handle="k8s-pod-network.fb2160212222fae5eecf19d10e155447b4a730720a9df910708389838a5ec59f" host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:21.824992 env[1305]: 2025-11-01 00:54:21.791 [INFO][4100] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.55.134/26] block=192.168.55.128/26 
handle="k8s-pod-network.fb2160212222fae5eecf19d10e155447b4a730720a9df910708389838a5ec59f" host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:21.824992 env[1305]: 2025-11-01 00:54:21.791 [INFO][4100] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.55.134/26] handle="k8s-pod-network.fb2160212222fae5eecf19d10e155447b4a730720a9df910708389838a5ec59f" host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:21.824992 env[1305]: 2025-11-01 00:54:21.791 [INFO][4100] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:54:21.824992 env[1305]: 2025-11-01 00:54:21.791 [INFO][4100] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.55.134/26] IPv6=[] ContainerID="fb2160212222fae5eecf19d10e155447b4a730720a9df910708389838a5ec59f" HandleID="k8s-pod-network.fb2160212222fae5eecf19d10e155447b4a730720a9df910708389838a5ec59f" Workload="ci--3510.3.8--n--0efaf8214b-k8s-goldmane--666569f655--j9dnh-eth0" Nov 1 00:54:21.825734 env[1305]: 2025-11-01 00:54:21.796 [INFO][4073] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fb2160212222fae5eecf19d10e155447b4a730720a9df910708389838a5ec59f" Namespace="calico-system" Pod="goldmane-666569f655-j9dnh" WorkloadEndpoint="ci--3510.3.8--n--0efaf8214b-k8s-goldmane--666569f655--j9dnh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0efaf8214b-k8s-goldmane--666569f655--j9dnh-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"acf47117-3eb1-4aa3-89a4-bc9fecdad703", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 53, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0efaf8214b", ContainerID:"", Pod:"goldmane-666569f655-j9dnh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.55.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calicf66231088a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:54:21.825734 env[1305]: 2025-11-01 00:54:21.797 [INFO][4073] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.55.134/32] ContainerID="fb2160212222fae5eecf19d10e155447b4a730720a9df910708389838a5ec59f" Namespace="calico-system" Pod="goldmane-666569f655-j9dnh" WorkloadEndpoint="ci--3510.3.8--n--0efaf8214b-k8s-goldmane--666569f655--j9dnh-eth0" Nov 1 00:54:21.825734 env[1305]: 2025-11-01 00:54:21.797 [INFO][4073] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicf66231088a ContainerID="fb2160212222fae5eecf19d10e155447b4a730720a9df910708389838a5ec59f" Namespace="calico-system" Pod="goldmane-666569f655-j9dnh" WorkloadEndpoint="ci--3510.3.8--n--0efaf8214b-k8s-goldmane--666569f655--j9dnh-eth0" Nov 1 00:54:21.825734 env[1305]: 2025-11-01 00:54:21.806 [INFO][4073] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fb2160212222fae5eecf19d10e155447b4a730720a9df910708389838a5ec59f" Namespace="calico-system" Pod="goldmane-666569f655-j9dnh" WorkloadEndpoint="ci--3510.3.8--n--0efaf8214b-k8s-goldmane--666569f655--j9dnh-eth0" Nov 1 00:54:21.825734 env[1305]: 2025-11-01 00:54:21.806 [INFO][4073] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="fb2160212222fae5eecf19d10e155447b4a730720a9df910708389838a5ec59f" Namespace="calico-system" Pod="goldmane-666569f655-j9dnh" WorkloadEndpoint="ci--3510.3.8--n--0efaf8214b-k8s-goldmane--666569f655--j9dnh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0efaf8214b-k8s-goldmane--666569f655--j9dnh-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"acf47117-3eb1-4aa3-89a4-bc9fecdad703", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 53, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0efaf8214b", ContainerID:"fb2160212222fae5eecf19d10e155447b4a730720a9df910708389838a5ec59f", Pod:"goldmane-666569f655-j9dnh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.55.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calicf66231088a", MAC:"7a:1c:80:b2:31:78", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:54:21.825734 env[1305]: 2025-11-01 00:54:21.823 [INFO][4073] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fb2160212222fae5eecf19d10e155447b4a730720a9df910708389838a5ec59f" Namespace="calico-system" Pod="goldmane-666569f655-j9dnh" 
WorkloadEndpoint="ci--3510.3.8--n--0efaf8214b-k8s-goldmane--666569f655--j9dnh-eth0" Nov 1 00:54:21.839045 env[1305]: time="2025-11-01T00:54:21.838962455Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:54:21.839272 env[1305]: time="2025-11-01T00:54:21.839229140Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:54:21.839386 env[1305]: time="2025-11-01T00:54:21.839361259Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:54:21.839822 env[1305]: time="2025-11-01T00:54:21.839763533Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fb2160212222fae5eecf19d10e155447b4a730720a9df910708389838a5ec59f pid=4130 runtime=io.containerd.runc.v2 Nov 1 00:54:21.859000 audit[4151]: NETFILTER_CFG table=filter:118 family=2 entries=14 op=nft_register_rule pid=4151 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:54:21.859000 audit[4151]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc6bcba7e0 a2=0 a3=7ffc6bcba7cc items=0 ppid=2198 pid=4151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:21.859000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:54:21.865000 audit[4150]: NETFILTER_CFG table=filter:119 family=2 entries=60 op=nft_register_chain pid=4150 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:54:21.865000 audit[4150]: SYSCALL arch=c000003e syscall=46 success=yes exit=29932 a0=3 a1=7ffd0ffc2840 a2=0 a3=7ffd0ffc282c items=0 ppid=3335 pid=4150 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:21.865000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:54:21.884000 audit[4151]: NETFILTER_CFG table=nat:120 family=2 entries=56 op=nft_register_chain pid=4151 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:54:21.884000 audit[4151]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffc6bcba7e0 a2=0 a3=7ffc6bcba7cc items=0 ppid=2198 pid=4151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:21.884000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:54:21.919869 systemd-networkd[1058]: cali97c3ef9aa97: Link UP Nov 1 00:54:21.927372 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Nov 1 00:54:21.927456 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali97c3ef9aa97: link becomes ready Nov 1 00:54:21.931707 systemd-networkd[1058]: cali97c3ef9aa97: Gained carrier Nov 1 00:54:21.952100 env[1305]: 2025-11-01 00:54:21.709 [INFO][4075] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--0efaf8214b-k8s-calico--apiserver--5f668d4ccf--gzvhz-eth0 calico-apiserver-5f668d4ccf- calico-apiserver 0aeb6ff7-2d7d-423c-8068-1607bda1ebe8 996 0 2025-11-01 00:53:53 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5f668d4ccf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510.3.8-n-0efaf8214b calico-apiserver-5f668d4ccf-gzvhz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali97c3ef9aa97 [] [] }} ContainerID="b00dbdefbf53b2753454d9ecaaaed3e390e11b6a5f43595015509b94d928a5d2" Namespace="calico-apiserver" Pod="calico-apiserver-5f668d4ccf-gzvhz" WorkloadEndpoint="ci--3510.3.8--n--0efaf8214b-k8s-calico--apiserver--5f668d4ccf--gzvhz-" Nov 1 00:54:21.952100 env[1305]: 2025-11-01 00:54:21.709 [INFO][4075] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b00dbdefbf53b2753454d9ecaaaed3e390e11b6a5f43595015509b94d928a5d2" Namespace="calico-apiserver" Pod="calico-apiserver-5f668d4ccf-gzvhz" WorkloadEndpoint="ci--3510.3.8--n--0efaf8214b-k8s-calico--apiserver--5f668d4ccf--gzvhz-eth0" Nov 1 00:54:21.952100 env[1305]: 2025-11-01 00:54:21.779 [INFO][4106] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b00dbdefbf53b2753454d9ecaaaed3e390e11b6a5f43595015509b94d928a5d2" HandleID="k8s-pod-network.b00dbdefbf53b2753454d9ecaaaed3e390e11b6a5f43595015509b94d928a5d2" Workload="ci--3510.3.8--n--0efaf8214b-k8s-calico--apiserver--5f668d4ccf--gzvhz-eth0" Nov 1 00:54:21.952100 env[1305]: 2025-11-01 00:54:21.779 [INFO][4106] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b00dbdefbf53b2753454d9ecaaaed3e390e11b6a5f43595015509b94d928a5d2" HandleID="k8s-pod-network.b00dbdefbf53b2753454d9ecaaaed3e390e11b6a5f43595015509b94d928a5d2" Workload="ci--3510.3.8--n--0efaf8214b-k8s-calico--apiserver--5f668d4ccf--gzvhz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032a5b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510.3.8-n-0efaf8214b", "pod":"calico-apiserver-5f668d4ccf-gzvhz", "timestamp":"2025-11-01 00:54:21.779511092 +0000 UTC"}, Hostname:"ci-3510.3.8-n-0efaf8214b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:54:21.952100 env[1305]: 2025-11-01 00:54:21.779 [INFO][4106] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:54:21.952100 env[1305]: 2025-11-01 00:54:21.792 [INFO][4106] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:54:21.952100 env[1305]: 2025-11-01 00:54:21.793 [INFO][4106] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-0efaf8214b' Nov 1 00:54:21.952100 env[1305]: 2025-11-01 00:54:21.851 [INFO][4106] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b00dbdefbf53b2753454d9ecaaaed3e390e11b6a5f43595015509b94d928a5d2" host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:21.952100 env[1305]: 2025-11-01 00:54:21.858 [INFO][4106] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:21.952100 env[1305]: 2025-11-01 00:54:21.866 [INFO][4106] ipam/ipam.go 511: Trying affinity for 192.168.55.128/26 host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:21.952100 env[1305]: 2025-11-01 00:54:21.869 [INFO][4106] ipam/ipam.go 158: Attempting to load block cidr=192.168.55.128/26 host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:21.952100 env[1305]: 2025-11-01 00:54:21.872 [INFO][4106] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.55.128/26 host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:21.952100 env[1305]: 2025-11-01 00:54:21.872 [INFO][4106] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.55.128/26 handle="k8s-pod-network.b00dbdefbf53b2753454d9ecaaaed3e390e11b6a5f43595015509b94d928a5d2" host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:21.952100 env[1305]: 2025-11-01 00:54:21.877 [INFO][4106] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b00dbdefbf53b2753454d9ecaaaed3e390e11b6a5f43595015509b94d928a5d2 Nov 1 00:54:21.952100 env[1305]: 2025-11-01 00:54:21.897 
[INFO][4106] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.55.128/26 handle="k8s-pod-network.b00dbdefbf53b2753454d9ecaaaed3e390e11b6a5f43595015509b94d928a5d2" host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:21.952100 env[1305]: 2025-11-01 00:54:21.908 [INFO][4106] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.55.135/26] block=192.168.55.128/26 handle="k8s-pod-network.b00dbdefbf53b2753454d9ecaaaed3e390e11b6a5f43595015509b94d928a5d2" host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:21.952100 env[1305]: 2025-11-01 00:54:21.908 [INFO][4106] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.55.135/26] handle="k8s-pod-network.b00dbdefbf53b2753454d9ecaaaed3e390e11b6a5f43595015509b94d928a5d2" host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:21.952100 env[1305]: 2025-11-01 00:54:21.908 [INFO][4106] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:54:21.952100 env[1305]: 2025-11-01 00:54:21.908 [INFO][4106] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.55.135/26] IPv6=[] ContainerID="b00dbdefbf53b2753454d9ecaaaed3e390e11b6a5f43595015509b94d928a5d2" HandleID="k8s-pod-network.b00dbdefbf53b2753454d9ecaaaed3e390e11b6a5f43595015509b94d928a5d2" Workload="ci--3510.3.8--n--0efaf8214b-k8s-calico--apiserver--5f668d4ccf--gzvhz-eth0" Nov 1 00:54:21.953011 env[1305]: 2025-11-01 00:54:21.911 [INFO][4075] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b00dbdefbf53b2753454d9ecaaaed3e390e11b6a5f43595015509b94d928a5d2" Namespace="calico-apiserver" Pod="calico-apiserver-5f668d4ccf-gzvhz" WorkloadEndpoint="ci--3510.3.8--n--0efaf8214b-k8s-calico--apiserver--5f668d4ccf--gzvhz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0efaf8214b-k8s-calico--apiserver--5f668d4ccf--gzvhz-eth0", GenerateName:"calico-apiserver-5f668d4ccf-", Namespace:"calico-apiserver", SelfLink:"", 
UID:"0aeb6ff7-2d7d-423c-8068-1607bda1ebe8", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 53, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f668d4ccf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0efaf8214b", ContainerID:"", Pod:"calico-apiserver-5f668d4ccf-gzvhz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.55.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali97c3ef9aa97", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:54:21.953011 env[1305]: 2025-11-01 00:54:21.911 [INFO][4075] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.55.135/32] ContainerID="b00dbdefbf53b2753454d9ecaaaed3e390e11b6a5f43595015509b94d928a5d2" Namespace="calico-apiserver" Pod="calico-apiserver-5f668d4ccf-gzvhz" WorkloadEndpoint="ci--3510.3.8--n--0efaf8214b-k8s-calico--apiserver--5f668d4ccf--gzvhz-eth0" Nov 1 00:54:21.953011 env[1305]: 2025-11-01 00:54:21.911 [INFO][4075] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali97c3ef9aa97 ContainerID="b00dbdefbf53b2753454d9ecaaaed3e390e11b6a5f43595015509b94d928a5d2" Namespace="calico-apiserver" Pod="calico-apiserver-5f668d4ccf-gzvhz" WorkloadEndpoint="ci--3510.3.8--n--0efaf8214b-k8s-calico--apiserver--5f668d4ccf--gzvhz-eth0" Nov 1 
00:54:21.953011 env[1305]: 2025-11-01 00:54:21.937 [INFO][4075] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b00dbdefbf53b2753454d9ecaaaed3e390e11b6a5f43595015509b94d928a5d2" Namespace="calico-apiserver" Pod="calico-apiserver-5f668d4ccf-gzvhz" WorkloadEndpoint="ci--3510.3.8--n--0efaf8214b-k8s-calico--apiserver--5f668d4ccf--gzvhz-eth0" Nov 1 00:54:21.953011 env[1305]: 2025-11-01 00:54:21.938 [INFO][4075] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b00dbdefbf53b2753454d9ecaaaed3e390e11b6a5f43595015509b94d928a5d2" Namespace="calico-apiserver" Pod="calico-apiserver-5f668d4ccf-gzvhz" WorkloadEndpoint="ci--3510.3.8--n--0efaf8214b-k8s-calico--apiserver--5f668d4ccf--gzvhz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0efaf8214b-k8s-calico--apiserver--5f668d4ccf--gzvhz-eth0", GenerateName:"calico-apiserver-5f668d4ccf-", Namespace:"calico-apiserver", SelfLink:"", UID:"0aeb6ff7-2d7d-423c-8068-1607bda1ebe8", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 53, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f668d4ccf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0efaf8214b", ContainerID:"b00dbdefbf53b2753454d9ecaaaed3e390e11b6a5f43595015509b94d928a5d2", Pod:"calico-apiserver-5f668d4ccf-gzvhz", Endpoint:"eth0", 
ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.55.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali97c3ef9aa97", MAC:"f2:9a:e2:5e:9f:23", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:54:21.953011 env[1305]: 2025-11-01 00:54:21.948 [INFO][4075] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b00dbdefbf53b2753454d9ecaaaed3e390e11b6a5f43595015509b94d928a5d2" Namespace="calico-apiserver" Pod="calico-apiserver-5f668d4ccf-gzvhz" WorkloadEndpoint="ci--3510.3.8--n--0efaf8214b-k8s-calico--apiserver--5f668d4ccf--gzvhz-eth0" Nov 1 00:54:21.972139 env[1305]: time="2025-11-01T00:54:21.972095478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-j9dnh,Uid:acf47117-3eb1-4aa3-89a4-bc9fecdad703,Namespace:calico-system,Attempt:1,} returns sandbox id \"fb2160212222fae5eecf19d10e155447b4a730720a9df910708389838a5ec59f\"" Nov 1 00:54:21.980000 audit[4182]: NETFILTER_CFG table=filter:121 family=2 entries=70 op=nft_register_chain pid=4182 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:54:21.980000 audit[4182]: SYSCALL arch=c000003e syscall=46 success=yes exit=34148 a0=3 a1=7ffc66811610 a2=0 a3=7ffc668115fc items=0 ppid=3335 pid=4182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:21.980000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:54:21.984310 env[1305]: time="2025-11-01T00:54:21.984234424Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:54:21.984464 env[1305]: time="2025-11-01T00:54:21.984436129Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:54:21.984601 env[1305]: time="2025-11-01T00:54:21.984574581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:54:21.984918 env[1305]: time="2025-11-01T00:54:21.984886868Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b00dbdefbf53b2753454d9ecaaaed3e390e11b6a5f43595015509b94d928a5d2 pid=4183 runtime=io.containerd.runc.v2 Nov 1 00:54:22.064099 env[1305]: time="2025-11-01T00:54:22.064051496Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:54:22.064936 env[1305]: time="2025-11-01T00:54:22.064891025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f668d4ccf-gzvhz,Uid:0aeb6ff7-2d7d-423c-8068-1607bda1ebe8,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"b00dbdefbf53b2753454d9ecaaaed3e390e11b6a5f43595015509b94d928a5d2\"" Nov 1 00:54:22.065403 env[1305]: time="2025-11-01T00:54:22.065271259Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:54:22.065695 kubelet[2095]: E1101 00:54:22.065652 2095 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:54:22.065830 kubelet[2095]: E1101 00:54:22.065706 2095 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:54:22.065972 kubelet[2095]: E1101 00:54:22.065927 2095 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qgc7f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,Terminatio
nMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-twt7m_calico-system(b08705e4-7a04-4c33-a8c8-a3f67298574d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:54:22.067802 kubelet[2095]: E1101 00:54:22.067510 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-twt7m" podUID="b08705e4-7a04-4c33-a8c8-a3f67298574d" Nov 1 00:54:22.068271 env[1305]: time="2025-11-01T00:54:22.068237592Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:54:22.080916 systemd-networkd[1058]: cali243fb163716: Gained IPv6LL Nov 1 00:54:22.370487 env[1305]: time="2025-11-01T00:54:22.370421165Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:54:22.371340 env[1305]: time="2025-11-01T00:54:22.371274643Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:54:22.371603 kubelet[2095]: E1101 00:54:22.371547 2095 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:54:22.372033 kubelet[2095]: E1101 00:54:22.372006 2095 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:54:22.372768 env[1305]: time="2025-11-01T00:54:22.372513004Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:54:22.373959 kubelet[2095]: E1101 00:54:22.372626 2095 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vvfdv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-j9dnh_calico-system(acf47117-3eb1-4aa3-89a4-bc9fecdad703): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:54:22.375842 kubelet[2095]: E1101 00:54:22.375698 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-j9dnh" podUID="acf47117-3eb1-4aa3-89a4-bc9fecdad703" Nov 1 00:54:22.593040 systemd-networkd[1058]: cali56a43b874bd: Gained IPv6LL Nov 1 00:54:22.671819 env[1305]: time="2025-11-01T00:54:22.671617007Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:54:22.673131 env[1305]: 
time="2025-11-01T00:54:22.673071700Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:54:22.673377 kubelet[2095]: E1101 00:54:22.673339 2095 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:54:22.673555 kubelet[2095]: E1101 00:54:22.673516 2095 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:54:22.673833 kubelet[2095]: E1101 00:54:22.673794 2095 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gj6f7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5f668d4ccf-gzvhz_calico-apiserver(0aeb6ff7-2d7d-423c-8068-1607bda1ebe8): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:54:22.675199 kubelet[2095]: E1101 00:54:22.675171 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f668d4ccf-gzvhz" podUID="0aeb6ff7-2d7d-423c-8068-1607bda1ebe8" Nov 1 00:54:22.752925 kubelet[2095]: E1101 00:54:22.752862 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f668d4ccf-gzvhz" podUID="0aeb6ff7-2d7d-423c-8068-1607bda1ebe8" Nov 1 00:54:22.756231 kubelet[2095]: E1101 00:54:22.756204 2095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:54:22.757313 kubelet[2095]: E1101 00:54:22.757276 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack 
image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-j9dnh" podUID="acf47117-3eb1-4aa3-89a4-bc9fecdad703" Nov 1 00:54:22.757674 kubelet[2095]: E1101 00:54:22.757631 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85b568d67d-z4c8c" podUID="0979e255-e4e9-4664-a95e-5354a9f7d531" Nov 1 00:54:22.758697 kubelet[2095]: E1101 00:54:22.758641 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-twt7m" 
podUID="b08705e4-7a04-4c33-a8c8-a3f67298574d" Nov 1 00:54:22.814000 audit[4218]: NETFILTER_CFG table=filter:122 family=2 entries=14 op=nft_register_rule pid=4218 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:54:22.814000 audit[4218]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffc45558c0 a2=0 a3=7fffc45558ac items=0 ppid=2198 pid=4218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:22.814000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:54:22.820000 audit[4218]: NETFILTER_CFG table=nat:123 family=2 entries=20 op=nft_register_rule pid=4218 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:54:22.820000 audit[4218]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fffc45558c0 a2=0 a3=7fffc45558ac items=0 ppid=2198 pid=4218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:22.820000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:54:22.848988 systemd-networkd[1058]: calicf66231088a: Gained IPv6LL Nov 1 00:54:23.361034 systemd-networkd[1058]: cali97c3ef9aa97: Gained IPv6LL Nov 1 00:54:23.466962 env[1305]: time="2025-11-01T00:54:23.466906616Z" level=info msg="StopPodSandbox for \"2a13d0cdc8990d99ba81d01c0c9b313b654101f1dd097a248bb33976f2d75804\"" Nov 1 00:54:23.554613 env[1305]: 2025-11-01 00:54:23.515 [INFO][4229] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2a13d0cdc8990d99ba81d01c0c9b313b654101f1dd097a248bb33976f2d75804" Nov 1 00:54:23.554613 env[1305]: 2025-11-01 00:54:23.515 
[INFO][4229] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2a13d0cdc8990d99ba81d01c0c9b313b654101f1dd097a248bb33976f2d75804" iface="eth0" netns="/var/run/netns/cni-af71666e-a9e9-82aa-a054-f957e9b6e047" Nov 1 00:54:23.554613 env[1305]: 2025-11-01 00:54:23.515 [INFO][4229] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2a13d0cdc8990d99ba81d01c0c9b313b654101f1dd097a248bb33976f2d75804" iface="eth0" netns="/var/run/netns/cni-af71666e-a9e9-82aa-a054-f957e9b6e047" Nov 1 00:54:23.554613 env[1305]: 2025-11-01 00:54:23.517 [INFO][4229] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2a13d0cdc8990d99ba81d01c0c9b313b654101f1dd097a248bb33976f2d75804" iface="eth0" netns="/var/run/netns/cni-af71666e-a9e9-82aa-a054-f957e9b6e047" Nov 1 00:54:23.554613 env[1305]: 2025-11-01 00:54:23.517 [INFO][4229] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2a13d0cdc8990d99ba81d01c0c9b313b654101f1dd097a248bb33976f2d75804" Nov 1 00:54:23.554613 env[1305]: 2025-11-01 00:54:23.517 [INFO][4229] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2a13d0cdc8990d99ba81d01c0c9b313b654101f1dd097a248bb33976f2d75804" Nov 1 00:54:23.554613 env[1305]: 2025-11-01 00:54:23.540 [INFO][4236] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2a13d0cdc8990d99ba81d01c0c9b313b654101f1dd097a248bb33976f2d75804" HandleID="k8s-pod-network.2a13d0cdc8990d99ba81d01c0c9b313b654101f1dd097a248bb33976f2d75804" Workload="ci--3510.3.8--n--0efaf8214b-k8s-calico--apiserver--5f668d4ccf--fmsxj-eth0" Nov 1 00:54:23.554613 env[1305]: 2025-11-01 00:54:23.540 [INFO][4236] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:54:23.554613 env[1305]: 2025-11-01 00:54:23.541 [INFO][4236] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:54:23.554613 env[1305]: 2025-11-01 00:54:23.548 [WARNING][4236] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="2a13d0cdc8990d99ba81d01c0c9b313b654101f1dd097a248bb33976f2d75804" HandleID="k8s-pod-network.2a13d0cdc8990d99ba81d01c0c9b313b654101f1dd097a248bb33976f2d75804" Workload="ci--3510.3.8--n--0efaf8214b-k8s-calico--apiserver--5f668d4ccf--fmsxj-eth0" Nov 1 00:54:23.554613 env[1305]: 2025-11-01 00:54:23.548 [INFO][4236] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2a13d0cdc8990d99ba81d01c0c9b313b654101f1dd097a248bb33976f2d75804" HandleID="k8s-pod-network.2a13d0cdc8990d99ba81d01c0c9b313b654101f1dd097a248bb33976f2d75804" Workload="ci--3510.3.8--n--0efaf8214b-k8s-calico--apiserver--5f668d4ccf--fmsxj-eth0" Nov 1 00:54:23.554613 env[1305]: 2025-11-01 00:54:23.550 [INFO][4236] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:54:23.554613 env[1305]: 2025-11-01 00:54:23.552 [INFO][4229] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2a13d0cdc8990d99ba81d01c0c9b313b654101f1dd097a248bb33976f2d75804" Nov 1 00:54:23.558358 systemd[1]: run-netns-cni\x2daf71666e\x2da9e9\x2d82aa\x2da054\x2df957e9b6e047.mount: Deactivated successfully. 
Nov 1 00:54:23.560408 env[1305]: time="2025-11-01T00:54:23.558942457Z" level=info msg="TearDown network for sandbox \"2a13d0cdc8990d99ba81d01c0c9b313b654101f1dd097a248bb33976f2d75804\" successfully" Nov 1 00:54:23.560408 env[1305]: time="2025-11-01T00:54:23.558984079Z" level=info msg="StopPodSandbox for \"2a13d0cdc8990d99ba81d01c0c9b313b654101f1dd097a248bb33976f2d75804\" returns successfully" Nov 1 00:54:23.560873 env[1305]: time="2025-11-01T00:54:23.560843035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f668d4ccf-fmsxj,Uid:447c37d4-c1de-4035-a57b-b729047ea7fb,Namespace:calico-apiserver,Attempt:1,}" Nov 1 00:54:23.703848 systemd-networkd[1058]: calic34e69b9f78: Link UP Nov 1 00:54:23.706441 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Nov 1 00:54:23.706525 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calic34e69b9f78: link becomes ready Nov 1 00:54:23.706646 systemd-networkd[1058]: calic34e69b9f78: Gained carrier Nov 1 00:54:23.727357 env[1305]: 2025-11-01 00:54:23.614 [INFO][4242] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--0efaf8214b-k8s-calico--apiserver--5f668d4ccf--fmsxj-eth0 calico-apiserver-5f668d4ccf- calico-apiserver 447c37d4-c1de-4035-a57b-b729047ea7fb 1037 0 2025-11-01 00:53:53 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5f668d4ccf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510.3.8-n-0efaf8214b calico-apiserver-5f668d4ccf-fmsxj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic34e69b9f78 [] [] }} ContainerID="e1cb9232d922a98decccb53b78940aa2564ffbe25eeddd711a1a7c5146e18bea" Namespace="calico-apiserver" Pod="calico-apiserver-5f668d4ccf-fmsxj" 
WorkloadEndpoint="ci--3510.3.8--n--0efaf8214b-k8s-calico--apiserver--5f668d4ccf--fmsxj-" Nov 1 00:54:23.727357 env[1305]: 2025-11-01 00:54:23.614 [INFO][4242] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e1cb9232d922a98decccb53b78940aa2564ffbe25eeddd711a1a7c5146e18bea" Namespace="calico-apiserver" Pod="calico-apiserver-5f668d4ccf-fmsxj" WorkloadEndpoint="ci--3510.3.8--n--0efaf8214b-k8s-calico--apiserver--5f668d4ccf--fmsxj-eth0" Nov 1 00:54:23.727357 env[1305]: 2025-11-01 00:54:23.644 [INFO][4255] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e1cb9232d922a98decccb53b78940aa2564ffbe25eeddd711a1a7c5146e18bea" HandleID="k8s-pod-network.e1cb9232d922a98decccb53b78940aa2564ffbe25eeddd711a1a7c5146e18bea" Workload="ci--3510.3.8--n--0efaf8214b-k8s-calico--apiserver--5f668d4ccf--fmsxj-eth0" Nov 1 00:54:23.727357 env[1305]: 2025-11-01 00:54:23.645 [INFO][4255] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e1cb9232d922a98decccb53b78940aa2564ffbe25eeddd711a1a7c5146e18bea" HandleID="k8s-pod-network.e1cb9232d922a98decccb53b78940aa2564ffbe25eeddd711a1a7c5146e18bea" Workload="ci--3510.3.8--n--0efaf8214b-k8s-calico--apiserver--5f668d4ccf--fmsxj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ccfe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510.3.8-n-0efaf8214b", "pod":"calico-apiserver-5f668d4ccf-fmsxj", "timestamp":"2025-11-01 00:54:23.64494785 +0000 UTC"}, Hostname:"ci-3510.3.8-n-0efaf8214b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:54:23.727357 env[1305]: 2025-11-01 00:54:23.645 [INFO][4255] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:54:23.727357 env[1305]: 2025-11-01 00:54:23.645 [INFO][4255] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:54:23.727357 env[1305]: 2025-11-01 00:54:23.645 [INFO][4255] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-0efaf8214b' Nov 1 00:54:23.727357 env[1305]: 2025-11-01 00:54:23.654 [INFO][4255] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e1cb9232d922a98decccb53b78940aa2564ffbe25eeddd711a1a7c5146e18bea" host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:23.727357 env[1305]: 2025-11-01 00:54:23.660 [INFO][4255] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:23.727357 env[1305]: 2025-11-01 00:54:23.664 [INFO][4255] ipam/ipam.go 511: Trying affinity for 192.168.55.128/26 host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:23.727357 env[1305]: 2025-11-01 00:54:23.666 [INFO][4255] ipam/ipam.go 158: Attempting to load block cidr=192.168.55.128/26 host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:23.727357 env[1305]: 2025-11-01 00:54:23.675 [INFO][4255] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.55.128/26 host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:23.727357 env[1305]: 2025-11-01 00:54:23.675 [INFO][4255] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.55.128/26 handle="k8s-pod-network.e1cb9232d922a98decccb53b78940aa2564ffbe25eeddd711a1a7c5146e18bea" host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:23.727357 env[1305]: 2025-11-01 00:54:23.677 [INFO][4255] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e1cb9232d922a98decccb53b78940aa2564ffbe25eeddd711a1a7c5146e18bea Nov 1 00:54:23.727357 env[1305]: 2025-11-01 00:54:23.682 [INFO][4255] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.55.128/26 handle="k8s-pod-network.e1cb9232d922a98decccb53b78940aa2564ffbe25eeddd711a1a7c5146e18bea" host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:23.727357 env[1305]: 2025-11-01 00:54:23.691 [INFO][4255] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.55.136/26] block=192.168.55.128/26 
handle="k8s-pod-network.e1cb9232d922a98decccb53b78940aa2564ffbe25eeddd711a1a7c5146e18bea" host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:23.727357 env[1305]: 2025-11-01 00:54:23.691 [INFO][4255] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.55.136/26] handle="k8s-pod-network.e1cb9232d922a98decccb53b78940aa2564ffbe25eeddd711a1a7c5146e18bea" host="ci-3510.3.8-n-0efaf8214b" Nov 1 00:54:23.727357 env[1305]: 2025-11-01 00:54:23.691 [INFO][4255] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:54:23.727357 env[1305]: 2025-11-01 00:54:23.691 [INFO][4255] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.55.136/26] IPv6=[] ContainerID="e1cb9232d922a98decccb53b78940aa2564ffbe25eeddd711a1a7c5146e18bea" HandleID="k8s-pod-network.e1cb9232d922a98decccb53b78940aa2564ffbe25eeddd711a1a7c5146e18bea" Workload="ci--3510.3.8--n--0efaf8214b-k8s-calico--apiserver--5f668d4ccf--fmsxj-eth0" Nov 1 00:54:23.730484 env[1305]: 2025-11-01 00:54:23.693 [INFO][4242] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e1cb9232d922a98decccb53b78940aa2564ffbe25eeddd711a1a7c5146e18bea" Namespace="calico-apiserver" Pod="calico-apiserver-5f668d4ccf-fmsxj" WorkloadEndpoint="ci--3510.3.8--n--0efaf8214b-k8s-calico--apiserver--5f668d4ccf--fmsxj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0efaf8214b-k8s-calico--apiserver--5f668d4ccf--fmsxj-eth0", GenerateName:"calico-apiserver-5f668d4ccf-", Namespace:"calico-apiserver", SelfLink:"", UID:"447c37d4-c1de-4035-a57b-b729047ea7fb", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 53, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f668d4ccf", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0efaf8214b", ContainerID:"", Pod:"calico-apiserver-5f668d4ccf-fmsxj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.55.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic34e69b9f78", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:54:23.730484 env[1305]: 2025-11-01 00:54:23.694 [INFO][4242] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.55.136/32] ContainerID="e1cb9232d922a98decccb53b78940aa2564ffbe25eeddd711a1a7c5146e18bea" Namespace="calico-apiserver" Pod="calico-apiserver-5f668d4ccf-fmsxj" WorkloadEndpoint="ci--3510.3.8--n--0efaf8214b-k8s-calico--apiserver--5f668d4ccf--fmsxj-eth0" Nov 1 00:54:23.730484 env[1305]: 2025-11-01 00:54:23.694 [INFO][4242] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic34e69b9f78 ContainerID="e1cb9232d922a98decccb53b78940aa2564ffbe25eeddd711a1a7c5146e18bea" Namespace="calico-apiserver" Pod="calico-apiserver-5f668d4ccf-fmsxj" WorkloadEndpoint="ci--3510.3.8--n--0efaf8214b-k8s-calico--apiserver--5f668d4ccf--fmsxj-eth0" Nov 1 00:54:23.730484 env[1305]: 2025-11-01 00:54:23.707 [INFO][4242] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e1cb9232d922a98decccb53b78940aa2564ffbe25eeddd711a1a7c5146e18bea" Namespace="calico-apiserver" Pod="calico-apiserver-5f668d4ccf-fmsxj" WorkloadEndpoint="ci--3510.3.8--n--0efaf8214b-k8s-calico--apiserver--5f668d4ccf--fmsxj-eth0" Nov 1 
00:54:23.730484 env[1305]: 2025-11-01 00:54:23.707 [INFO][4242] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e1cb9232d922a98decccb53b78940aa2564ffbe25eeddd711a1a7c5146e18bea" Namespace="calico-apiserver" Pod="calico-apiserver-5f668d4ccf-fmsxj" WorkloadEndpoint="ci--3510.3.8--n--0efaf8214b-k8s-calico--apiserver--5f668d4ccf--fmsxj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0efaf8214b-k8s-calico--apiserver--5f668d4ccf--fmsxj-eth0", GenerateName:"calico-apiserver-5f668d4ccf-", Namespace:"calico-apiserver", SelfLink:"", UID:"447c37d4-c1de-4035-a57b-b729047ea7fb", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 53, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f668d4ccf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0efaf8214b", ContainerID:"e1cb9232d922a98decccb53b78940aa2564ffbe25eeddd711a1a7c5146e18bea", Pod:"calico-apiserver-5f668d4ccf-fmsxj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.55.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic34e69b9f78", MAC:"1e:92:c3:d7:55:3b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} 
Nov 1 00:54:23.730484 env[1305]: 2025-11-01 00:54:23.722 [INFO][4242] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e1cb9232d922a98decccb53b78940aa2564ffbe25eeddd711a1a7c5146e18bea" Namespace="calico-apiserver" Pod="calico-apiserver-5f668d4ccf-fmsxj" WorkloadEndpoint="ci--3510.3.8--n--0efaf8214b-k8s-calico--apiserver--5f668d4ccf--fmsxj-eth0" Nov 1 00:54:23.740799 kernel: kauditd_printk_skb: 592 callbacks suppressed Nov 1 00:54:23.740902 kernel: audit: type=1325 audit(1761958463.733:445): table=filter:124 family=2 entries=67 op=nft_register_chain pid=4266 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:54:23.740933 kernel: audit: type=1300 audit(1761958463.733:445): arch=c000003e syscall=46 success=yes exit=31868 a0=3 a1=7ffcb29ead50 a2=0 a3=7ffcb29ead3c items=0 ppid=3335 pid=4266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:23.733000 audit[4266]: NETFILTER_CFG table=filter:124 family=2 entries=67 op=nft_register_chain pid=4266 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:54:23.733000 audit[4266]: SYSCALL arch=c000003e syscall=46 success=yes exit=31868 a0=3 a1=7ffcb29ead50 a2=0 a3=7ffcb29ead3c items=0 ppid=3335 pid=4266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:23.733000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:54:23.757833 kernel: audit: type=1327 audit(1761958463.733:445): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 
1 00:54:23.759667 kubelet[2095]: E1101 00:54:23.759631 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-j9dnh" podUID="acf47117-3eb1-4aa3-89a4-bc9fecdad703" Nov 1 00:54:23.760139 kubelet[2095]: E1101 00:54:23.759687 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f668d4ccf-gzvhz" podUID="0aeb6ff7-2d7d-423c-8068-1607bda1ebe8" Nov 1 00:54:23.779498 env[1305]: time="2025-11-01T00:54:23.779422363Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:54:23.779498 env[1305]: time="2025-11-01T00:54:23.779500061Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:54:23.779691 env[1305]: time="2025-11-01T00:54:23.779523673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:54:23.779691 env[1305]: time="2025-11-01T00:54:23.779650731Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e1cb9232d922a98decccb53b78940aa2564ffbe25eeddd711a1a7c5146e18bea pid=4278 runtime=io.containerd.runc.v2 Nov 1 00:54:23.842000 audit[4310]: NETFILTER_CFG table=filter:125 family=2 entries=14 op=nft_register_rule pid=4310 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:54:23.849366 kernel: audit: type=1325 audit(1761958463.842:446): table=filter:125 family=2 entries=14 op=nft_register_rule pid=4310 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:54:23.842000 audit[4310]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc4894c6b0 a2=0 a3=7ffc4894c69c items=0 ppid=2198 pid=4310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:23.859798 kernel: audit: type=1300 audit(1761958463.842:446): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc4894c6b0 a2=0 a3=7ffc4894c69c items=0 ppid=2198 pid=4310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:23.842000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:54:23.850000 audit[4310]: NETFILTER_CFG table=nat:126 family=2 entries=20 op=nft_register_rule pid=4310 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:54:23.869823 kernel: audit: type=1327 audit(1761958463.842:446): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 
00:54:23.869898 kernel: audit: type=1325 audit(1761958463.850:447): table=nat:126 family=2 entries=20 op=nft_register_rule pid=4310 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:54:23.872774 env[1305]: time="2025-11-01T00:54:23.871008926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f668d4ccf-fmsxj,Uid:447c37d4-c1de-4035-a57b-b729047ea7fb,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"e1cb9232d922a98decccb53b78940aa2564ffbe25eeddd711a1a7c5146e18bea\"" Nov 1 00:54:23.850000 audit[4310]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc4894c6b0 a2=0 a3=7ffc4894c69c items=0 ppid=2198 pid=4310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:23.883507 kernel: audit: type=1300 audit(1761958463.850:447): arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc4894c6b0 a2=0 a3=7ffc4894c69c items=0 ppid=2198 pid=4310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:23.883583 kernel: audit: type=1327 audit(1761958463.850:447): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:54:23.850000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:54:23.883651 env[1305]: time="2025-11-01T00:54:23.882945435Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:54:24.193375 env[1305]: time="2025-11-01T00:54:24.193325404Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:54:24.194301 env[1305]: time="2025-11-01T00:54:24.194243830Z" level=error 
msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:54:24.194696 kubelet[2095]: E1101 00:54:24.194644 2095 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:54:24.194804 kubelet[2095]: E1101 00:54:24.194720 2095 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:54:24.195186 kubelet[2095]: E1101 00:54:24.195139 2095 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-79wrz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5f668d4ccf-fmsxj_calico-apiserver(447c37d4-c1de-4035-a57b-b729047ea7fb): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:54:24.196514 kubelet[2095]: E1101 00:54:24.196457 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f668d4ccf-fmsxj" podUID="447c37d4-c1de-4035-a57b-b729047ea7fb" Nov 1 00:54:24.763621 kubelet[2095]: E1101 00:54:24.763569 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f668d4ccf-fmsxj" podUID="447c37d4-c1de-4035-a57b-b729047ea7fb" Nov 1 00:54:24.809000 audit[4325]: NETFILTER_CFG table=filter:127 family=2 entries=14 op=nft_register_rule pid=4325 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:54:24.809000 audit[4325]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd42b769b0 a2=0 a3=7ffd42b7699c items=0 ppid=2198 pid=4325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:24.809000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:54:24.815794 kernel: audit: type=1325 audit(1761958464.809:448): table=filter:127 family=2 entries=14 op=nft_register_rule pid=4325 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:54:24.818000 audit[4325]: NETFILTER_CFG table=nat:128 family=2 entries=20 op=nft_register_rule pid=4325 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:54:24.818000 audit[4325]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd42b769b0 a2=0 a3=7ffd42b7699c items=0 ppid=2198 pid=4325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:24.818000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:54:24.897139 systemd-networkd[1058]: calic34e69b9f78: Gained IPv6LL Nov 1 00:54:25.765475 kubelet[2095]: E1101 00:54:25.765420 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f668d4ccf-fmsxj" podUID="447c37d4-c1de-4035-a57b-b729047ea7fb" Nov 1 00:54:27.334668 kubelet[2095]: I1101 00:54:27.334612 2095 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 00:54:27.335188 kubelet[2095]: E1101 00:54:27.335080 2095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:54:27.367994 systemd[1]: run-containerd-runc-k8s.io-eabb405608f3c39fa999d4d9f5c729f9b8fd113580e722669da852859a2bed21-runc.BUStV8.mount: Deactivated successfully. Nov 1 00:54:27.510288 systemd[1]: run-containerd-runc-k8s.io-eabb405608f3c39fa999d4d9f5c729f9b8fd113580e722669da852859a2bed21-runc.o2Nwtp.mount: Deactivated successfully. Nov 1 00:54:27.769293 kubelet[2095]: E1101 00:54:27.769215 2095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:54:31.467103 env[1305]: time="2025-11-01T00:54:31.466804356Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:54:31.763987 env[1305]: time="2025-11-01T00:54:31.763821085Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:54:31.765924 env[1305]: time="2025-11-01T00:54:31.765844529Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:54:31.766373 kubelet[2095]: E1101 00:54:31.766319 2095 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:54:31.766789 kubelet[2095]: E1101 00:54:31.766397 2095 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:54:31.766789 kubelet[2095]: E1101 00:54:31.766536 2095 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:124ea995052c4baba627fef25423b142,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z5bfl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6dfb57dc84-knf65_calico-system(0234b74a-300a-4772-b752-16560b6b9a9c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:54:31.769146 env[1305]: time="2025-11-01T00:54:31.768882607Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:54:32.054099 env[1305]: time="2025-11-01T00:54:32.053936946Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:54:32.055138 env[1305]: time="2025-11-01T00:54:32.055079036Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:54:32.055426 kubelet[2095]: E1101 00:54:32.055374 2095 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:54:32.055513 kubelet[2095]: E1101 00:54:32.055433 2095 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:54:32.055611 kubelet[2095]: E1101 00:54:32.055564 2095 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z5bfl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6dfb57dc84-knf65_calico-system(0234b74a-300a-4772-b752-16560b6b9a9c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:54:32.057105 kubelet[2095]: E1101 00:54:32.057052 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6dfb57dc84-knf65" podUID="0234b74a-300a-4772-b752-16560b6b9a9c" Nov 1 00:54:32.130686 systemd[1]: Started sshd@9-144.126.212.254:22-139.178.89.65:46474.service. Nov 1 00:54:32.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-144.126.212.254:22-139.178.89.65:46474 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:54:32.133136 kernel: kauditd_printk_skb: 5 callbacks suppressed Nov 1 00:54:32.133236 kernel: audit: type=1130 audit(1761958472.130:450): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-144.126.212.254:22-139.178.89.65:46474 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:54:32.236524 sshd[4380]: Accepted publickey for core from 139.178.89.65 port 46474 ssh2: RSA SHA256:bQOwnZoRZNmgRHdcvbYhT2IlOX5E1Dxtpq66cFKwaFs Nov 1 00:54:32.235000 audit[4380]: USER_ACCT pid=4380 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:54:32.237000 audit[4380]: CRED_ACQ pid=4380 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:54:32.252838 kernel: audit: type=1101 audit(1761958472.235:451): pid=4380 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:54:32.252923 kernel: audit: type=1103 audit(1761958472.237:452): pid=4380 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:54:32.257782 kernel: audit: type=1006 audit(1761958472.237:453): pid=4380 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Nov 1 00:54:32.237000 audit[4380]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc0157b200 a2=3 a3=0 items=0 ppid=1 pid=4380 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:32.237000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 
00:54:32.268731 kernel: audit: type=1300 audit(1761958472.237:453): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc0157b200 a2=3 a3=0 items=0 ppid=1 pid=4380 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:32.268824 kernel: audit: type=1327 audit(1761958472.237:453): proctitle=737368643A20636F7265205B707269765D Nov 1 00:54:32.269190 sshd[4380]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:54:32.284849 systemd[1]: Started session-8.scope. Nov 1 00:54:32.285528 systemd-logind[1290]: New session 8 of user core. Nov 1 00:54:32.303000 audit[4380]: USER_START pid=4380 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:54:32.313799 kernel: audit: type=1105 audit(1761958472.303:454): pid=4380 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:54:32.303000 audit[4383]: CRED_ACQ pid=4383 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:54:32.324787 kernel: audit: type=1103 audit(1761958472.303:455): pid=4383 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:54:32.999621 sshd[4380]: 
pam_unix(sshd:session): session closed for user core Nov 1 00:54:33.000000 audit[4380]: USER_END pid=4380 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:54:33.011202 kernel: audit: type=1106 audit(1761958473.000:456): pid=4380 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:54:33.010983 systemd[1]: sshd@9-144.126.212.254:22-139.178.89.65:46474.service: Deactivated successfully. Nov 1 00:54:33.000000 audit[4380]: CRED_DISP pid=4380 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:54:33.013004 systemd[1]: session-8.scope: Deactivated successfully. Nov 1 00:54:33.018968 systemd-logind[1290]: Session 8 logged out. Waiting for processes to exit. Nov 1 00:54:33.019839 kernel: audit: type=1104 audit(1761958473.000:457): pid=4380 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:54:33.011000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-144.126.212.254:22-139.178.89.65:46474 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:54:33.020365 systemd-logind[1290]: Removed session 8. 
Nov 1 00:54:35.467303 env[1305]: time="2025-11-01T00:54:35.467050911Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:54:35.758801 env[1305]: time="2025-11-01T00:54:35.758516741Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:54:35.759659 env[1305]: time="2025-11-01T00:54:35.759542163Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:54:35.760000 kubelet[2095]: E1101 00:54:35.759929 2095 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:54:35.760367 kubelet[2095]: E1101 00:54:35.760010 2095 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:54:35.760685 kubelet[2095]: E1101 00:54:35.760610 2095 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vvfdv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-j9dnh_calico-system(acf47117-3eb1-4aa3-89a4-bc9fecdad703): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:54:35.761899 kubelet[2095]: E1101 00:54:35.761849 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-j9dnh" podUID="acf47117-3eb1-4aa3-89a4-bc9fecdad703" Nov 1 00:54:36.439869 env[1305]: time="2025-11-01T00:54:36.439740546Z" level=info msg="StopPodSandbox for \"ccff4abf279d481fe758cf98b6a09e4c82985082166ebb004231c04ed8110c8b\"" Nov 1 00:54:36.480944 env[1305]: time="2025-11-01T00:54:36.480894151Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:54:36.652869 env[1305]: 2025-11-01 00:54:36.548 [WARNING][4405] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ccff4abf279d481fe758cf98b6a09e4c82985082166ebb004231c04ed8110c8b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0efaf8214b-k8s-calico--apiserver--5f668d4ccf--gzvhz-eth0", GenerateName:"calico-apiserver-5f668d4ccf-", Namespace:"calico-apiserver", SelfLink:"", UID:"0aeb6ff7-2d7d-423c-8068-1607bda1ebe8", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 53, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f668d4ccf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0efaf8214b", ContainerID:"b00dbdefbf53b2753454d9ecaaaed3e390e11b6a5f43595015509b94d928a5d2", Pod:"calico-apiserver-5f668d4ccf-gzvhz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.55.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali97c3ef9aa97", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:54:36.652869 env[1305]: 2025-11-01 00:54:36.549 [INFO][4405] cni-plugin/k8s.go 640: 
Cleaning up netns ContainerID="ccff4abf279d481fe758cf98b6a09e4c82985082166ebb004231c04ed8110c8b" Nov 1 00:54:36.652869 env[1305]: 2025-11-01 00:54:36.550 [INFO][4405] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ccff4abf279d481fe758cf98b6a09e4c82985082166ebb004231c04ed8110c8b" iface="eth0" netns="" Nov 1 00:54:36.652869 env[1305]: 2025-11-01 00:54:36.550 [INFO][4405] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ccff4abf279d481fe758cf98b6a09e4c82985082166ebb004231c04ed8110c8b" Nov 1 00:54:36.652869 env[1305]: 2025-11-01 00:54:36.550 [INFO][4405] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ccff4abf279d481fe758cf98b6a09e4c82985082166ebb004231c04ed8110c8b" Nov 1 00:54:36.652869 env[1305]: 2025-11-01 00:54:36.628 [INFO][4414] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ccff4abf279d481fe758cf98b6a09e4c82985082166ebb004231c04ed8110c8b" HandleID="k8s-pod-network.ccff4abf279d481fe758cf98b6a09e4c82985082166ebb004231c04ed8110c8b" Workload="ci--3510.3.8--n--0efaf8214b-k8s-calico--apiserver--5f668d4ccf--gzvhz-eth0" Nov 1 00:54:36.652869 env[1305]: 2025-11-01 00:54:36.629 [INFO][4414] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:54:36.652869 env[1305]: 2025-11-01 00:54:36.629 [INFO][4414] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:54:36.652869 env[1305]: 2025-11-01 00:54:36.640 [WARNING][4414] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ccff4abf279d481fe758cf98b6a09e4c82985082166ebb004231c04ed8110c8b" HandleID="k8s-pod-network.ccff4abf279d481fe758cf98b6a09e4c82985082166ebb004231c04ed8110c8b" Workload="ci--3510.3.8--n--0efaf8214b-k8s-calico--apiserver--5f668d4ccf--gzvhz-eth0" Nov 1 00:54:36.652869 env[1305]: 2025-11-01 00:54:36.640 [INFO][4414] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ccff4abf279d481fe758cf98b6a09e4c82985082166ebb004231c04ed8110c8b" HandleID="k8s-pod-network.ccff4abf279d481fe758cf98b6a09e4c82985082166ebb004231c04ed8110c8b" Workload="ci--3510.3.8--n--0efaf8214b-k8s-calico--apiserver--5f668d4ccf--gzvhz-eth0" Nov 1 00:54:36.652869 env[1305]: 2025-11-01 00:54:36.641 [INFO][4414] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:54:36.652869 env[1305]: 2025-11-01 00:54:36.647 [INFO][4405] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ccff4abf279d481fe758cf98b6a09e4c82985082166ebb004231c04ed8110c8b" Nov 1 00:54:36.652869 env[1305]: time="2025-11-01T00:54:36.650928230Z" level=info msg="TearDown network for sandbox \"ccff4abf279d481fe758cf98b6a09e4c82985082166ebb004231c04ed8110c8b\" successfully" Nov 1 00:54:36.652869 env[1305]: time="2025-11-01T00:54:36.650983210Z" level=info msg="StopPodSandbox for \"ccff4abf279d481fe758cf98b6a09e4c82985082166ebb004231c04ed8110c8b\" returns successfully" Nov 1 00:54:36.652869 env[1305]: time="2025-11-01T00:54:36.651502600Z" level=info msg="RemovePodSandbox for \"ccff4abf279d481fe758cf98b6a09e4c82985082166ebb004231c04ed8110c8b\"" Nov 1 00:54:36.652869 env[1305]: time="2025-11-01T00:54:36.651535076Z" level=info msg="Forcibly stopping sandbox \"ccff4abf279d481fe758cf98b6a09e4c82985082166ebb004231c04ed8110c8b\"" Nov 1 00:54:36.756692 env[1305]: 2025-11-01 00:54:36.704 [WARNING][4428] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ccff4abf279d481fe758cf98b6a09e4c82985082166ebb004231c04ed8110c8b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0efaf8214b-k8s-calico--apiserver--5f668d4ccf--gzvhz-eth0", GenerateName:"calico-apiserver-5f668d4ccf-", Namespace:"calico-apiserver", SelfLink:"", UID:"0aeb6ff7-2d7d-423c-8068-1607bda1ebe8", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 53, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f668d4ccf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0efaf8214b", ContainerID:"b00dbdefbf53b2753454d9ecaaaed3e390e11b6a5f43595015509b94d928a5d2", Pod:"calico-apiserver-5f668d4ccf-gzvhz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.55.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali97c3ef9aa97", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:54:36.756692 env[1305]: 2025-11-01 00:54:36.704 [INFO][4428] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ccff4abf279d481fe758cf98b6a09e4c82985082166ebb004231c04ed8110c8b" Nov 1 00:54:36.756692 env[1305]: 2025-11-01 00:54:36.704 [INFO][4428] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="ccff4abf279d481fe758cf98b6a09e4c82985082166ebb004231c04ed8110c8b" iface="eth0" netns="" Nov 1 00:54:36.756692 env[1305]: 2025-11-01 00:54:36.704 [INFO][4428] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ccff4abf279d481fe758cf98b6a09e4c82985082166ebb004231c04ed8110c8b" Nov 1 00:54:36.756692 env[1305]: 2025-11-01 00:54:36.704 [INFO][4428] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ccff4abf279d481fe758cf98b6a09e4c82985082166ebb004231c04ed8110c8b" Nov 1 00:54:36.756692 env[1305]: 2025-11-01 00:54:36.738 [INFO][4435] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ccff4abf279d481fe758cf98b6a09e4c82985082166ebb004231c04ed8110c8b" HandleID="k8s-pod-network.ccff4abf279d481fe758cf98b6a09e4c82985082166ebb004231c04ed8110c8b" Workload="ci--3510.3.8--n--0efaf8214b-k8s-calico--apiserver--5f668d4ccf--gzvhz-eth0" Nov 1 00:54:36.756692 env[1305]: 2025-11-01 00:54:36.738 [INFO][4435] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:54:36.756692 env[1305]: 2025-11-01 00:54:36.738 [INFO][4435] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:54:36.756692 env[1305]: 2025-11-01 00:54:36.749 [WARNING][4435] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ccff4abf279d481fe758cf98b6a09e4c82985082166ebb004231c04ed8110c8b" HandleID="k8s-pod-network.ccff4abf279d481fe758cf98b6a09e4c82985082166ebb004231c04ed8110c8b" Workload="ci--3510.3.8--n--0efaf8214b-k8s-calico--apiserver--5f668d4ccf--gzvhz-eth0" Nov 1 00:54:36.756692 env[1305]: 2025-11-01 00:54:36.749 [INFO][4435] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ccff4abf279d481fe758cf98b6a09e4c82985082166ebb004231c04ed8110c8b" HandleID="k8s-pod-network.ccff4abf279d481fe758cf98b6a09e4c82985082166ebb004231c04ed8110c8b" Workload="ci--3510.3.8--n--0efaf8214b-k8s-calico--apiserver--5f668d4ccf--gzvhz-eth0" Nov 1 00:54:36.756692 env[1305]: 2025-11-01 00:54:36.751 [INFO][4435] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:54:36.756692 env[1305]: 2025-11-01 00:54:36.754 [INFO][4428] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ccff4abf279d481fe758cf98b6a09e4c82985082166ebb004231c04ed8110c8b" Nov 1 00:54:36.756692 env[1305]: time="2025-11-01T00:54:36.756668487Z" level=info msg="TearDown network for sandbox \"ccff4abf279d481fe758cf98b6a09e4c82985082166ebb004231c04ed8110c8b\" successfully" Nov 1 00:54:36.761600 env[1305]: time="2025-11-01T00:54:36.761554024Z" level=info msg="RemovePodSandbox \"ccff4abf279d481fe758cf98b6a09e4c82985082166ebb004231c04ed8110c8b\" returns successfully" Nov 1 00:54:36.762232 env[1305]: time="2025-11-01T00:54:36.762195597Z" level=info msg="StopPodSandbox for \"d44b499bf360f5f24ec2911ec851a0ee95d2d5f2107e00217cff1abcc2259caf\"" Nov 1 00:54:36.791538 env[1305]: time="2025-11-01T00:54:36.791497290Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:54:36.798123 env[1305]: time="2025-11-01T00:54:36.798053320Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:54:36.799140 kubelet[2095]: E1101 00:54:36.798414 2095 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:54:36.799140 kubelet[2095]: E1101 00:54:36.798501 2095 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:54:36.799140 kubelet[2095]: E1101 00:54:36.799038 2095 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bmmkj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-85b568d67d-z4c8c_calico-system(0979e255-e4e9-4664-a95e-5354a9f7d531): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:54:36.800864 kubelet[2095]: E1101 00:54:36.800828 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85b568d67d-z4c8c" podUID="0979e255-e4e9-4664-a95e-5354a9f7d531" Nov 1 00:54:36.970012 env[1305]: 2025-11-01 00:54:36.886 [WARNING][4450] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d44b499bf360f5f24ec2911ec851a0ee95d2d5f2107e00217cff1abcc2259caf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0efaf8214b-k8s-coredns--668d6bf9bc--95hcw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"04f4ba43-b773-4444-b355-28563af8171b", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 53, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0efaf8214b", ContainerID:"a1c082772d96d5cfbab1ee96c6e97fa58c9c28ab6ac98e31ed6ece0cd02369c5", Pod:"coredns-668d6bf9bc-95hcw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.55.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califf827e12b10", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:54:36.970012 env[1305]: 2025-11-01 
00:54:36.886 [INFO][4450] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d44b499bf360f5f24ec2911ec851a0ee95d2d5f2107e00217cff1abcc2259caf" Nov 1 00:54:36.970012 env[1305]: 2025-11-01 00:54:36.886 [INFO][4450] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d44b499bf360f5f24ec2911ec851a0ee95d2d5f2107e00217cff1abcc2259caf" iface="eth0" netns="" Nov 1 00:54:36.970012 env[1305]: 2025-11-01 00:54:36.886 [INFO][4450] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d44b499bf360f5f24ec2911ec851a0ee95d2d5f2107e00217cff1abcc2259caf" Nov 1 00:54:36.970012 env[1305]: 2025-11-01 00:54:36.886 [INFO][4450] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d44b499bf360f5f24ec2911ec851a0ee95d2d5f2107e00217cff1abcc2259caf" Nov 1 00:54:36.970012 env[1305]: 2025-11-01 00:54:36.952 [INFO][4457] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d44b499bf360f5f24ec2911ec851a0ee95d2d5f2107e00217cff1abcc2259caf" HandleID="k8s-pod-network.d44b499bf360f5f24ec2911ec851a0ee95d2d5f2107e00217cff1abcc2259caf" Workload="ci--3510.3.8--n--0efaf8214b-k8s-coredns--668d6bf9bc--95hcw-eth0" Nov 1 00:54:36.970012 env[1305]: 2025-11-01 00:54:36.952 [INFO][4457] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:54:36.970012 env[1305]: 2025-11-01 00:54:36.952 [INFO][4457] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:54:36.970012 env[1305]: 2025-11-01 00:54:36.964 [WARNING][4457] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d44b499bf360f5f24ec2911ec851a0ee95d2d5f2107e00217cff1abcc2259caf" HandleID="k8s-pod-network.d44b499bf360f5f24ec2911ec851a0ee95d2d5f2107e00217cff1abcc2259caf" Workload="ci--3510.3.8--n--0efaf8214b-k8s-coredns--668d6bf9bc--95hcw-eth0" Nov 1 00:54:36.970012 env[1305]: 2025-11-01 00:54:36.964 [INFO][4457] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d44b499bf360f5f24ec2911ec851a0ee95d2d5f2107e00217cff1abcc2259caf" HandleID="k8s-pod-network.d44b499bf360f5f24ec2911ec851a0ee95d2d5f2107e00217cff1abcc2259caf" Workload="ci--3510.3.8--n--0efaf8214b-k8s-coredns--668d6bf9bc--95hcw-eth0" Nov 1 00:54:36.970012 env[1305]: 2025-11-01 00:54:36.966 [INFO][4457] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:54:36.970012 env[1305]: 2025-11-01 00:54:36.968 [INFO][4450] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d44b499bf360f5f24ec2911ec851a0ee95d2d5f2107e00217cff1abcc2259caf" Nov 1 00:54:36.970582 env[1305]: time="2025-11-01T00:54:36.970043602Z" level=info msg="TearDown network for sandbox \"d44b499bf360f5f24ec2911ec851a0ee95d2d5f2107e00217cff1abcc2259caf\" successfully" Nov 1 00:54:36.970582 env[1305]: time="2025-11-01T00:54:36.970075477Z" level=info msg="StopPodSandbox for \"d44b499bf360f5f24ec2911ec851a0ee95d2d5f2107e00217cff1abcc2259caf\" returns successfully" Nov 1 00:54:36.970582 env[1305]: time="2025-11-01T00:54:36.970541062Z" level=info msg="RemovePodSandbox for \"d44b499bf360f5f24ec2911ec851a0ee95d2d5f2107e00217cff1abcc2259caf\"" Nov 1 00:54:36.970713 env[1305]: time="2025-11-01T00:54:36.970570573Z" level=info msg="Forcibly stopping sandbox \"d44b499bf360f5f24ec2911ec851a0ee95d2d5f2107e00217cff1abcc2259caf\"" Nov 1 00:54:37.098895 env[1305]: 2025-11-01 00:54:37.025 [WARNING][4471] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d44b499bf360f5f24ec2911ec851a0ee95d2d5f2107e00217cff1abcc2259caf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0efaf8214b-k8s-coredns--668d6bf9bc--95hcw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"04f4ba43-b773-4444-b355-28563af8171b", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 53, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0efaf8214b", ContainerID:"a1c082772d96d5cfbab1ee96c6e97fa58c9c28ab6ac98e31ed6ece0cd02369c5", Pod:"coredns-668d6bf9bc-95hcw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.55.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califf827e12b10", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:54:37.098895 env[1305]: 2025-11-01 
00:54:37.025 [INFO][4471] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d44b499bf360f5f24ec2911ec851a0ee95d2d5f2107e00217cff1abcc2259caf" Nov 1 00:54:37.098895 env[1305]: 2025-11-01 00:54:37.025 [INFO][4471] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d44b499bf360f5f24ec2911ec851a0ee95d2d5f2107e00217cff1abcc2259caf" iface="eth0" netns="" Nov 1 00:54:37.098895 env[1305]: 2025-11-01 00:54:37.025 [INFO][4471] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d44b499bf360f5f24ec2911ec851a0ee95d2d5f2107e00217cff1abcc2259caf" Nov 1 00:54:37.098895 env[1305]: 2025-11-01 00:54:37.026 [INFO][4471] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d44b499bf360f5f24ec2911ec851a0ee95d2d5f2107e00217cff1abcc2259caf" Nov 1 00:54:37.098895 env[1305]: 2025-11-01 00:54:37.082 [INFO][4479] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d44b499bf360f5f24ec2911ec851a0ee95d2d5f2107e00217cff1abcc2259caf" HandleID="k8s-pod-network.d44b499bf360f5f24ec2911ec851a0ee95d2d5f2107e00217cff1abcc2259caf" Workload="ci--3510.3.8--n--0efaf8214b-k8s-coredns--668d6bf9bc--95hcw-eth0" Nov 1 00:54:37.098895 env[1305]: 2025-11-01 00:54:37.082 [INFO][4479] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:54:37.098895 env[1305]: 2025-11-01 00:54:37.082 [INFO][4479] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:54:37.098895 env[1305]: 2025-11-01 00:54:37.088 [WARNING][4479] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d44b499bf360f5f24ec2911ec851a0ee95d2d5f2107e00217cff1abcc2259caf" HandleID="k8s-pod-network.d44b499bf360f5f24ec2911ec851a0ee95d2d5f2107e00217cff1abcc2259caf" Workload="ci--3510.3.8--n--0efaf8214b-k8s-coredns--668d6bf9bc--95hcw-eth0" Nov 1 00:54:37.098895 env[1305]: 2025-11-01 00:54:37.088 [INFO][4479] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d44b499bf360f5f24ec2911ec851a0ee95d2d5f2107e00217cff1abcc2259caf" HandleID="k8s-pod-network.d44b499bf360f5f24ec2911ec851a0ee95d2d5f2107e00217cff1abcc2259caf" Workload="ci--3510.3.8--n--0efaf8214b-k8s-coredns--668d6bf9bc--95hcw-eth0" Nov 1 00:54:37.098895 env[1305]: 2025-11-01 00:54:37.094 [INFO][4479] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:54:37.098895 env[1305]: 2025-11-01 00:54:37.096 [INFO][4471] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d44b499bf360f5f24ec2911ec851a0ee95d2d5f2107e00217cff1abcc2259caf" Nov 1 00:54:37.099568 env[1305]: time="2025-11-01T00:54:37.099520651Z" level=info msg="TearDown network for sandbox \"d44b499bf360f5f24ec2911ec851a0ee95d2d5f2107e00217cff1abcc2259caf\" successfully" Nov 1 00:54:37.108939 env[1305]: time="2025-11-01T00:54:37.108889763Z" level=info msg="RemovePodSandbox \"d44b499bf360f5f24ec2911ec851a0ee95d2d5f2107e00217cff1abcc2259caf\" returns successfully" Nov 1 00:54:37.109656 env[1305]: time="2025-11-01T00:54:37.109629331Z" level=info msg="StopPodSandbox for \"a71c47bf229b80276f24cc7651603be913775ecceb2cc3f939c907c10c7a4fd4\"" Nov 1 00:54:37.197930 env[1305]: 2025-11-01 00:54:37.154 [WARNING][4493] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a71c47bf229b80276f24cc7651603be913775ecceb2cc3f939c907c10c7a4fd4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0efaf8214b-k8s-calico--kube--controllers--85b568d67d--z4c8c-eth0", GenerateName:"calico-kube-controllers-85b568d67d-", Namespace:"calico-system", SelfLink:"", UID:"0979e255-e4e9-4664-a95e-5354a9f7d531", ResourceVersion:"1163", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 53, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85b568d67d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0efaf8214b", ContainerID:"9ff4f2351553a739177c828475227263cf3a64a6253900ba16d947869bb2f11c", Pod:"calico-kube-controllers-85b568d67d-z4c8c", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.55.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali243fb163716", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:54:37.197930 env[1305]: 2025-11-01 00:54:37.154 [INFO][4493] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a71c47bf229b80276f24cc7651603be913775ecceb2cc3f939c907c10c7a4fd4" Nov 1 00:54:37.197930 env[1305]: 2025-11-01 00:54:37.154 [INFO][4493] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="a71c47bf229b80276f24cc7651603be913775ecceb2cc3f939c907c10c7a4fd4" iface="eth0" netns="" Nov 1 00:54:37.197930 env[1305]: 2025-11-01 00:54:37.154 [INFO][4493] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a71c47bf229b80276f24cc7651603be913775ecceb2cc3f939c907c10c7a4fd4" Nov 1 00:54:37.197930 env[1305]: 2025-11-01 00:54:37.154 [INFO][4493] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a71c47bf229b80276f24cc7651603be913775ecceb2cc3f939c907c10c7a4fd4" Nov 1 00:54:37.197930 env[1305]: 2025-11-01 00:54:37.184 [INFO][4500] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a71c47bf229b80276f24cc7651603be913775ecceb2cc3f939c907c10c7a4fd4" HandleID="k8s-pod-network.a71c47bf229b80276f24cc7651603be913775ecceb2cc3f939c907c10c7a4fd4" Workload="ci--3510.3.8--n--0efaf8214b-k8s-calico--kube--controllers--85b568d67d--z4c8c-eth0" Nov 1 00:54:37.197930 env[1305]: 2025-11-01 00:54:37.184 [INFO][4500] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:54:37.197930 env[1305]: 2025-11-01 00:54:37.185 [INFO][4500] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:54:37.197930 env[1305]: 2025-11-01 00:54:37.192 [WARNING][4500] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a71c47bf229b80276f24cc7651603be913775ecceb2cc3f939c907c10c7a4fd4" HandleID="k8s-pod-network.a71c47bf229b80276f24cc7651603be913775ecceb2cc3f939c907c10c7a4fd4" Workload="ci--3510.3.8--n--0efaf8214b-k8s-calico--kube--controllers--85b568d67d--z4c8c-eth0" Nov 1 00:54:37.197930 env[1305]: 2025-11-01 00:54:37.192 [INFO][4500] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a71c47bf229b80276f24cc7651603be913775ecceb2cc3f939c907c10c7a4fd4" HandleID="k8s-pod-network.a71c47bf229b80276f24cc7651603be913775ecceb2cc3f939c907c10c7a4fd4" Workload="ci--3510.3.8--n--0efaf8214b-k8s-calico--kube--controllers--85b568d67d--z4c8c-eth0" Nov 1 00:54:37.197930 env[1305]: 2025-11-01 00:54:37.193 [INFO][4500] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:54:37.197930 env[1305]: 2025-11-01 00:54:37.195 [INFO][4493] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a71c47bf229b80276f24cc7651603be913775ecceb2cc3f939c907c10c7a4fd4" Nov 1 00:54:37.198576 env[1305]: time="2025-11-01T00:54:37.197965739Z" level=info msg="TearDown network for sandbox \"a71c47bf229b80276f24cc7651603be913775ecceb2cc3f939c907c10c7a4fd4\" successfully" Nov 1 00:54:37.198576 env[1305]: time="2025-11-01T00:54:37.197999387Z" level=info msg="StopPodSandbox for \"a71c47bf229b80276f24cc7651603be913775ecceb2cc3f939c907c10c7a4fd4\" returns successfully" Nov 1 00:54:37.199240 env[1305]: time="2025-11-01T00:54:37.199212003Z" level=info msg="RemovePodSandbox for \"a71c47bf229b80276f24cc7651603be913775ecceb2cc3f939c907c10c7a4fd4\"" Nov 1 00:54:37.199400 env[1305]: time="2025-11-01T00:54:37.199357747Z" level=info msg="Forcibly stopping sandbox \"a71c47bf229b80276f24cc7651603be913775ecceb2cc3f939c907c10c7a4fd4\"" Nov 1 00:54:37.315822 env[1305]: 2025-11-01 00:54:37.270 [WARNING][4516] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a71c47bf229b80276f24cc7651603be913775ecceb2cc3f939c907c10c7a4fd4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0efaf8214b-k8s-calico--kube--controllers--85b568d67d--z4c8c-eth0", GenerateName:"calico-kube-controllers-85b568d67d-", Namespace:"calico-system", SelfLink:"", UID:"0979e255-e4e9-4664-a95e-5354a9f7d531", ResourceVersion:"1163", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 53, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85b568d67d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0efaf8214b", ContainerID:"9ff4f2351553a739177c828475227263cf3a64a6253900ba16d947869bb2f11c", Pod:"calico-kube-controllers-85b568d67d-z4c8c", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.55.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali243fb163716", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:54:37.315822 env[1305]: 2025-11-01 00:54:37.270 [INFO][4516] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a71c47bf229b80276f24cc7651603be913775ecceb2cc3f939c907c10c7a4fd4" Nov 1 00:54:37.315822 env[1305]: 2025-11-01 00:54:37.270 [INFO][4516] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="a71c47bf229b80276f24cc7651603be913775ecceb2cc3f939c907c10c7a4fd4" iface="eth0" netns="" Nov 1 00:54:37.315822 env[1305]: 2025-11-01 00:54:37.270 [INFO][4516] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a71c47bf229b80276f24cc7651603be913775ecceb2cc3f939c907c10c7a4fd4" Nov 1 00:54:37.315822 env[1305]: 2025-11-01 00:54:37.270 [INFO][4516] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a71c47bf229b80276f24cc7651603be913775ecceb2cc3f939c907c10c7a4fd4" Nov 1 00:54:37.315822 env[1305]: 2025-11-01 00:54:37.301 [INFO][4523] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a71c47bf229b80276f24cc7651603be913775ecceb2cc3f939c907c10c7a4fd4" HandleID="k8s-pod-network.a71c47bf229b80276f24cc7651603be913775ecceb2cc3f939c907c10c7a4fd4" Workload="ci--3510.3.8--n--0efaf8214b-k8s-calico--kube--controllers--85b568d67d--z4c8c-eth0" Nov 1 00:54:37.315822 env[1305]: 2025-11-01 00:54:37.301 [INFO][4523] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:54:37.315822 env[1305]: 2025-11-01 00:54:37.301 [INFO][4523] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:54:37.315822 env[1305]: 2025-11-01 00:54:37.308 [WARNING][4523] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a71c47bf229b80276f24cc7651603be913775ecceb2cc3f939c907c10c7a4fd4" HandleID="k8s-pod-network.a71c47bf229b80276f24cc7651603be913775ecceb2cc3f939c907c10c7a4fd4" Workload="ci--3510.3.8--n--0efaf8214b-k8s-calico--kube--controllers--85b568d67d--z4c8c-eth0" Nov 1 00:54:37.315822 env[1305]: 2025-11-01 00:54:37.308 [INFO][4523] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a71c47bf229b80276f24cc7651603be913775ecceb2cc3f939c907c10c7a4fd4" HandleID="k8s-pod-network.a71c47bf229b80276f24cc7651603be913775ecceb2cc3f939c907c10c7a4fd4" Workload="ci--3510.3.8--n--0efaf8214b-k8s-calico--kube--controllers--85b568d67d--z4c8c-eth0" Nov 1 00:54:37.315822 env[1305]: 2025-11-01 00:54:37.309 [INFO][4523] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:54:37.315822 env[1305]: 2025-11-01 00:54:37.311 [INFO][4516] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a71c47bf229b80276f24cc7651603be913775ecceb2cc3f939c907c10c7a4fd4" Nov 1 00:54:37.315822 env[1305]: time="2025-11-01T00:54:37.313440227Z" level=info msg="TearDown network for sandbox \"a71c47bf229b80276f24cc7651603be913775ecceb2cc3f939c907c10c7a4fd4\" successfully" Nov 1 00:54:37.317486 env[1305]: time="2025-11-01T00:54:37.317442039Z" level=info msg="RemovePodSandbox \"a71c47bf229b80276f24cc7651603be913775ecceb2cc3f939c907c10c7a4fd4\" returns successfully" Nov 1 00:54:37.318124 env[1305]: time="2025-11-01T00:54:37.318099761Z" level=info msg="StopPodSandbox for \"64b90828beb527fb7df55cab820b5b862fa0a86b6bce440a5f19677415b2d1b9\"" Nov 1 00:54:37.468782 env[1305]: time="2025-11-01T00:54:37.468725018Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:54:37.526676 env[1305]: 2025-11-01 00:54:37.388 [WARNING][4537] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="64b90828beb527fb7df55cab820b5b862fa0a86b6bce440a5f19677415b2d1b9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0efaf8214b-k8s-csi--node--driver--twt7m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b08705e4-7a04-4c33-a8c8-a3f67298574d", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 53, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0efaf8214b", ContainerID:"2b76c79b62ff7a865347dc2f0ee6e71abb1485386213834c53c46352e3656012", Pod:"csi-node-driver-twt7m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.55.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali56a43b874bd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:54:37.526676 env[1305]: 2025-11-01 00:54:37.388 [INFO][4537] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="64b90828beb527fb7df55cab820b5b862fa0a86b6bce440a5f19677415b2d1b9" Nov 1 00:54:37.526676 env[1305]: 2025-11-01 00:54:37.388 [INFO][4537] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="64b90828beb527fb7df55cab820b5b862fa0a86b6bce440a5f19677415b2d1b9" iface="eth0" netns="" Nov 1 00:54:37.526676 env[1305]: 2025-11-01 00:54:37.388 [INFO][4537] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="64b90828beb527fb7df55cab820b5b862fa0a86b6bce440a5f19677415b2d1b9" Nov 1 00:54:37.526676 env[1305]: 2025-11-01 00:54:37.388 [INFO][4537] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="64b90828beb527fb7df55cab820b5b862fa0a86b6bce440a5f19677415b2d1b9" Nov 1 00:54:37.526676 env[1305]: 2025-11-01 00:54:37.472 [INFO][4544] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="64b90828beb527fb7df55cab820b5b862fa0a86b6bce440a5f19677415b2d1b9" HandleID="k8s-pod-network.64b90828beb527fb7df55cab820b5b862fa0a86b6bce440a5f19677415b2d1b9" Workload="ci--3510.3.8--n--0efaf8214b-k8s-csi--node--driver--twt7m-eth0" Nov 1 00:54:37.526676 env[1305]: 2025-11-01 00:54:37.472 [INFO][4544] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:54:37.526676 env[1305]: 2025-11-01 00:54:37.475 [INFO][4544] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:54:37.526676 env[1305]: 2025-11-01 00:54:37.494 [WARNING][4544] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="64b90828beb527fb7df55cab820b5b862fa0a86b6bce440a5f19677415b2d1b9" HandleID="k8s-pod-network.64b90828beb527fb7df55cab820b5b862fa0a86b6bce440a5f19677415b2d1b9" Workload="ci--3510.3.8--n--0efaf8214b-k8s-csi--node--driver--twt7m-eth0" Nov 1 00:54:37.526676 env[1305]: 2025-11-01 00:54:37.494 [INFO][4544] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="64b90828beb527fb7df55cab820b5b862fa0a86b6bce440a5f19677415b2d1b9" HandleID="k8s-pod-network.64b90828beb527fb7df55cab820b5b862fa0a86b6bce440a5f19677415b2d1b9" Workload="ci--3510.3.8--n--0efaf8214b-k8s-csi--node--driver--twt7m-eth0" Nov 1 00:54:37.526676 env[1305]: 2025-11-01 00:54:37.515 [INFO][4544] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:54:37.526676 env[1305]: 2025-11-01 00:54:37.518 [INFO][4537] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="64b90828beb527fb7df55cab820b5b862fa0a86b6bce440a5f19677415b2d1b9" Nov 1 00:54:37.527650 env[1305]: time="2025-11-01T00:54:37.527605359Z" level=info msg="TearDown network for sandbox \"64b90828beb527fb7df55cab820b5b862fa0a86b6bce440a5f19677415b2d1b9\" successfully" Nov 1 00:54:37.527733 env[1305]: time="2025-11-01T00:54:37.527713532Z" level=info msg="StopPodSandbox for \"64b90828beb527fb7df55cab820b5b862fa0a86b6bce440a5f19677415b2d1b9\" returns successfully" Nov 1 00:54:37.528516 env[1305]: time="2025-11-01T00:54:37.528482363Z" level=info msg="RemovePodSandbox for \"64b90828beb527fb7df55cab820b5b862fa0a86b6bce440a5f19677415b2d1b9\"" Nov 1 00:54:37.528611 env[1305]: time="2025-11-01T00:54:37.528545439Z" level=info msg="Forcibly stopping sandbox \"64b90828beb527fb7df55cab820b5b862fa0a86b6bce440a5f19677415b2d1b9\"" Nov 1 00:54:37.677461 env[1305]: 2025-11-01 00:54:37.615 [WARNING][4558] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="64b90828beb527fb7df55cab820b5b862fa0a86b6bce440a5f19677415b2d1b9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0efaf8214b-k8s-csi--node--driver--twt7m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b08705e4-7a04-4c33-a8c8-a3f67298574d", ResourceVersion:"1174", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 53, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0efaf8214b", ContainerID:"2b76c79b62ff7a865347dc2f0ee6e71abb1485386213834c53c46352e3656012", Pod:"csi-node-driver-twt7m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.55.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali56a43b874bd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:54:37.677461 env[1305]: 2025-11-01 00:54:37.615 [INFO][4558] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="64b90828beb527fb7df55cab820b5b862fa0a86b6bce440a5f19677415b2d1b9" Nov 1 00:54:37.677461 env[1305]: 2025-11-01 00:54:37.616 [INFO][4558] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="64b90828beb527fb7df55cab820b5b862fa0a86b6bce440a5f19677415b2d1b9" iface="eth0" netns="" Nov 1 00:54:37.677461 env[1305]: 2025-11-01 00:54:37.616 [INFO][4558] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="64b90828beb527fb7df55cab820b5b862fa0a86b6bce440a5f19677415b2d1b9" Nov 1 00:54:37.677461 env[1305]: 2025-11-01 00:54:37.616 [INFO][4558] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="64b90828beb527fb7df55cab820b5b862fa0a86b6bce440a5f19677415b2d1b9" Nov 1 00:54:37.677461 env[1305]: 2025-11-01 00:54:37.662 [INFO][4565] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="64b90828beb527fb7df55cab820b5b862fa0a86b6bce440a5f19677415b2d1b9" HandleID="k8s-pod-network.64b90828beb527fb7df55cab820b5b862fa0a86b6bce440a5f19677415b2d1b9" Workload="ci--3510.3.8--n--0efaf8214b-k8s-csi--node--driver--twt7m-eth0" Nov 1 00:54:37.677461 env[1305]: 2025-11-01 00:54:37.662 [INFO][4565] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:54:37.677461 env[1305]: 2025-11-01 00:54:37.662 [INFO][4565] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:54:37.677461 env[1305]: 2025-11-01 00:54:37.670 [WARNING][4565] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="64b90828beb527fb7df55cab820b5b862fa0a86b6bce440a5f19677415b2d1b9" HandleID="k8s-pod-network.64b90828beb527fb7df55cab820b5b862fa0a86b6bce440a5f19677415b2d1b9" Workload="ci--3510.3.8--n--0efaf8214b-k8s-csi--node--driver--twt7m-eth0" Nov 1 00:54:37.677461 env[1305]: 2025-11-01 00:54:37.670 [INFO][4565] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="64b90828beb527fb7df55cab820b5b862fa0a86b6bce440a5f19677415b2d1b9" HandleID="k8s-pod-network.64b90828beb527fb7df55cab820b5b862fa0a86b6bce440a5f19677415b2d1b9" Workload="ci--3510.3.8--n--0efaf8214b-k8s-csi--node--driver--twt7m-eth0" Nov 1 00:54:37.677461 env[1305]: 2025-11-01 00:54:37.672 [INFO][4565] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:54:37.677461 env[1305]: 2025-11-01 00:54:37.675 [INFO][4558] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="64b90828beb527fb7df55cab820b5b862fa0a86b6bce440a5f19677415b2d1b9" Nov 1 00:54:37.678179 env[1305]: time="2025-11-01T00:54:37.678142474Z" level=info msg="TearDown network for sandbox \"64b90828beb527fb7df55cab820b5b862fa0a86b6bce440a5f19677415b2d1b9\" successfully" Nov 1 00:54:37.680924 env[1305]: time="2025-11-01T00:54:37.680890817Z" level=info msg="RemovePodSandbox \"64b90828beb527fb7df55cab820b5b862fa0a86b6bce440a5f19677415b2d1b9\" returns successfully" Nov 1 00:54:37.681795 env[1305]: time="2025-11-01T00:54:37.681737770Z" level=info msg="StopPodSandbox for \"bdf1219825ffb0504da66c39f49f8fbd55336a1f4e428b159b218ea10f5222a6\"" Nov 1 00:54:37.760867 env[1305]: time="2025-11-01T00:54:37.757850951Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:54:37.760867 env[1305]: time="2025-11-01T00:54:37.758780584Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:54:37.760867 env[1305]: time="2025-11-01T00:54:37.760083441Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:54:37.761208 kubelet[2095]: E1101 00:54:37.759081 2095 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:54:37.761208 kubelet[2095]: E1101 00:54:37.759158 2095 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:54:37.761208 kubelet[2095]: E1101 00:54:37.759468 2095 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qgc7f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-twt7m_calico-system(b08705e4-7a04-4c33-a8c8-a3f67298574d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:54:37.850818 env[1305]: 2025-11-01 00:54:37.769 [WARNING][4579] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="bdf1219825ffb0504da66c39f49f8fbd55336a1f4e428b159b218ea10f5222a6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0efaf8214b-k8s-goldmane--666569f655--j9dnh-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"acf47117-3eb1-4aa3-89a4-bc9fecdad703", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 53, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0efaf8214b", ContainerID:"fb2160212222fae5eecf19d10e155447b4a730720a9df910708389838a5ec59f", Pod:"goldmane-666569f655-j9dnh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.55.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calicf66231088a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:54:37.850818 env[1305]: 2025-11-01 00:54:37.769 [INFO][4579] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="bdf1219825ffb0504da66c39f49f8fbd55336a1f4e428b159b218ea10f5222a6" Nov 1 00:54:37.850818 env[1305]: 2025-11-01 00:54:37.769 [INFO][4579] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bdf1219825ffb0504da66c39f49f8fbd55336a1f4e428b159b218ea10f5222a6" iface="eth0" netns="" Nov 1 00:54:37.850818 env[1305]: 2025-11-01 00:54:37.769 [INFO][4579] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bdf1219825ffb0504da66c39f49f8fbd55336a1f4e428b159b218ea10f5222a6" Nov 1 00:54:37.850818 env[1305]: 2025-11-01 00:54:37.769 [INFO][4579] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bdf1219825ffb0504da66c39f49f8fbd55336a1f4e428b159b218ea10f5222a6" Nov 1 00:54:37.850818 env[1305]: 2025-11-01 00:54:37.836 [INFO][4586] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="bdf1219825ffb0504da66c39f49f8fbd55336a1f4e428b159b218ea10f5222a6" HandleID="k8s-pod-network.bdf1219825ffb0504da66c39f49f8fbd55336a1f4e428b159b218ea10f5222a6" Workload="ci--3510.3.8--n--0efaf8214b-k8s-goldmane--666569f655--j9dnh-eth0" Nov 1 00:54:37.850818 env[1305]: 2025-11-01 00:54:37.836 [INFO][4586] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:54:37.850818 env[1305]: 2025-11-01 00:54:37.837 [INFO][4586] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:54:37.850818 env[1305]: 2025-11-01 00:54:37.845 [WARNING][4586] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bdf1219825ffb0504da66c39f49f8fbd55336a1f4e428b159b218ea10f5222a6" HandleID="k8s-pod-network.bdf1219825ffb0504da66c39f49f8fbd55336a1f4e428b159b218ea10f5222a6" Workload="ci--3510.3.8--n--0efaf8214b-k8s-goldmane--666569f655--j9dnh-eth0" Nov 1 00:54:37.850818 env[1305]: 2025-11-01 00:54:37.845 [INFO][4586] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="bdf1219825ffb0504da66c39f49f8fbd55336a1f4e428b159b218ea10f5222a6" HandleID="k8s-pod-network.bdf1219825ffb0504da66c39f49f8fbd55336a1f4e428b159b218ea10f5222a6" Workload="ci--3510.3.8--n--0efaf8214b-k8s-goldmane--666569f655--j9dnh-eth0" Nov 1 00:54:37.850818 env[1305]: 2025-11-01 00:54:37.847 [INFO][4586] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:54:37.850818 env[1305]: 2025-11-01 00:54:37.848 [INFO][4579] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bdf1219825ffb0504da66c39f49f8fbd55336a1f4e428b159b218ea10f5222a6" Nov 1 00:54:37.851772 env[1305]: time="2025-11-01T00:54:37.850840312Z" level=info msg="TearDown network for sandbox \"bdf1219825ffb0504da66c39f49f8fbd55336a1f4e428b159b218ea10f5222a6\" successfully" Nov 1 00:54:37.851772 env[1305]: time="2025-11-01T00:54:37.850869818Z" level=info msg="StopPodSandbox for \"bdf1219825ffb0504da66c39f49f8fbd55336a1f4e428b159b218ea10f5222a6\" returns successfully" Nov 1 00:54:37.851772 env[1305]: time="2025-11-01T00:54:37.851311544Z" level=info msg="RemovePodSandbox for \"bdf1219825ffb0504da66c39f49f8fbd55336a1f4e428b159b218ea10f5222a6\"" Nov 1 00:54:37.851772 env[1305]: time="2025-11-01T00:54:37.851342073Z" level=info msg="Forcibly stopping sandbox \"bdf1219825ffb0504da66c39f49f8fbd55336a1f4e428b159b218ea10f5222a6\"" Nov 1 00:54:37.982233 env[1305]: 2025-11-01 00:54:37.917 [WARNING][4601] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bdf1219825ffb0504da66c39f49f8fbd55336a1f4e428b159b218ea10f5222a6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0efaf8214b-k8s-goldmane--666569f655--j9dnh-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"acf47117-3eb1-4aa3-89a4-bc9fecdad703", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 53, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0efaf8214b", ContainerID:"fb2160212222fae5eecf19d10e155447b4a730720a9df910708389838a5ec59f", Pod:"goldmane-666569f655-j9dnh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.55.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calicf66231088a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:54:37.982233 env[1305]: 2025-11-01 00:54:37.917 [INFO][4601] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bdf1219825ffb0504da66c39f49f8fbd55336a1f4e428b159b218ea10f5222a6" Nov 1 00:54:37.982233 env[1305]: 2025-11-01 00:54:37.917 [INFO][4601] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="bdf1219825ffb0504da66c39f49f8fbd55336a1f4e428b159b218ea10f5222a6" iface="eth0" netns="" Nov 1 00:54:37.982233 env[1305]: 2025-11-01 00:54:37.917 [INFO][4601] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bdf1219825ffb0504da66c39f49f8fbd55336a1f4e428b159b218ea10f5222a6" Nov 1 00:54:37.982233 env[1305]: 2025-11-01 00:54:37.917 [INFO][4601] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bdf1219825ffb0504da66c39f49f8fbd55336a1f4e428b159b218ea10f5222a6" Nov 1 00:54:37.982233 env[1305]: 2025-11-01 00:54:37.959 [INFO][4609] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="bdf1219825ffb0504da66c39f49f8fbd55336a1f4e428b159b218ea10f5222a6" HandleID="k8s-pod-network.bdf1219825ffb0504da66c39f49f8fbd55336a1f4e428b159b218ea10f5222a6" Workload="ci--3510.3.8--n--0efaf8214b-k8s-goldmane--666569f655--j9dnh-eth0" Nov 1 00:54:37.982233 env[1305]: 2025-11-01 00:54:37.960 [INFO][4609] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:54:37.982233 env[1305]: 2025-11-01 00:54:37.960 [INFO][4609] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:54:37.982233 env[1305]: 2025-11-01 00:54:37.966 [WARNING][4609] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bdf1219825ffb0504da66c39f49f8fbd55336a1f4e428b159b218ea10f5222a6" HandleID="k8s-pod-network.bdf1219825ffb0504da66c39f49f8fbd55336a1f4e428b159b218ea10f5222a6" Workload="ci--3510.3.8--n--0efaf8214b-k8s-goldmane--666569f655--j9dnh-eth0" Nov 1 00:54:37.982233 env[1305]: 2025-11-01 00:54:37.966 [INFO][4609] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="bdf1219825ffb0504da66c39f49f8fbd55336a1f4e428b159b218ea10f5222a6" HandleID="k8s-pod-network.bdf1219825ffb0504da66c39f49f8fbd55336a1f4e428b159b218ea10f5222a6" Workload="ci--3510.3.8--n--0efaf8214b-k8s-goldmane--666569f655--j9dnh-eth0" Nov 1 00:54:37.982233 env[1305]: 2025-11-01 00:54:37.976 [INFO][4609] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:54:37.982233 env[1305]: 2025-11-01 00:54:37.978 [INFO][4601] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bdf1219825ffb0504da66c39f49f8fbd55336a1f4e428b159b218ea10f5222a6" Nov 1 00:54:37.982933 env[1305]: time="2025-11-01T00:54:37.982883822Z" level=info msg="TearDown network for sandbox \"bdf1219825ffb0504da66c39f49f8fbd55336a1f4e428b159b218ea10f5222a6\" successfully" Nov 1 00:54:37.985891 env[1305]: time="2025-11-01T00:54:37.985845352Z" level=info msg="RemovePodSandbox \"bdf1219825ffb0504da66c39f49f8fbd55336a1f4e428b159b218ea10f5222a6\" returns successfully" Nov 1 00:54:37.987218 env[1305]: time="2025-11-01T00:54:37.987190472Z" level=info msg="StopPodSandbox for \"421b6723aed3cd2f0f62334a373da8e89499ded19c5999e2874189c1e0e9a773\"" Nov 1 00:54:38.004998 systemd[1]: Started sshd@10-144.126.212.254:22-139.178.89.65:50032.service. Nov 1 00:54:38.014464 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 00:54:38.014591 kernel: audit: type=1130 audit(1761958478.004:459): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-144.126.212.254:22-139.178.89.65:50032 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:54:38.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-144.126.212.254:22-139.178.89.65:50032 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:54:38.066202 env[1305]: time="2025-11-01T00:54:38.066017251Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:54:38.071006 env[1305]: time="2025-11-01T00:54:38.070248591Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:54:38.071805 kubelet[2095]: E1101 00:54:38.071715 2095 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:54:38.072292 kubelet[2095]: E1101 00:54:38.071835 2095 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:54:38.072292 kubelet[2095]: E1101 00:54:38.072161 2095 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gj6f7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5f668d4ccf-gzvhz_calico-apiserver(0aeb6ff7-2d7d-423c-8068-1607bda1ebe8): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:54:38.073782 kubelet[2095]: E1101 00:54:38.073678 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f668d4ccf-gzvhz" podUID="0aeb6ff7-2d7d-423c-8068-1607bda1ebe8" Nov 1 00:54:38.076198 env[1305]: time="2025-11-01T00:54:38.076020181Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:54:38.154000 audit[4629]: USER_ACCT pid=4629 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:54:38.165378 kernel: audit: type=1101 audit(1761958478.154:460): pid=4629 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:54:38.165462 sshd[4629]: Accepted publickey for core from 139.178.89.65 port 50032 ssh2: RSA SHA256:bQOwnZoRZNmgRHdcvbYhT2IlOX5E1Dxtpq66cFKwaFs Nov 1 00:54:38.165000 audit[4629]: CRED_ACQ pid=4629 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh 
res=success' Nov 1 00:54:38.178309 kernel: audit: type=1103 audit(1761958478.165:461): pid=4629 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:54:38.179389 sshd[4629]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:54:38.192081 kernel: audit: type=1006 audit(1761958478.166:462): pid=4629 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Nov 1 00:54:38.192183 kernel: audit: type=1300 audit(1761958478.166:462): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff52153d70 a2=3 a3=0 items=0 ppid=1 pid=4629 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:38.166000 audit[4629]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff52153d70 a2=3 a3=0 items=0 ppid=1 pid=4629 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:38.166000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:54:38.205670 systemd[1]: Started session-9.scope. Nov 1 00:54:38.211131 kernel: audit: type=1327 audit(1761958478.166:462): proctitle=737368643A20636F7265205B707269765D Nov 1 00:54:38.209248 systemd-logind[1290]: New session 9 of user core. 
Nov 1 00:54:38.229000 audit[4629]: USER_START pid=4629 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:54:38.251906 kernel: audit: type=1105 audit(1761958478.229:463): pid=4629 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:54:38.251000 audit[4640]: CRED_ACQ pid=4640 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:54:38.264457 kernel: audit: type=1103 audit(1761958478.251:464): pid=4640 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:54:38.269816 env[1305]: 2025-11-01 00:54:38.102 [WARNING][4624] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="421b6723aed3cd2f0f62334a373da8e89499ded19c5999e2874189c1e0e9a773" WorkloadEndpoint="ci--3510.3.8--n--0efaf8214b-k8s-whisker--56478f7ccd--qkwt8-eth0" Nov 1 00:54:38.269816 env[1305]: 2025-11-01 00:54:38.102 [INFO][4624] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="421b6723aed3cd2f0f62334a373da8e89499ded19c5999e2874189c1e0e9a773" Nov 1 00:54:38.269816 env[1305]: 2025-11-01 00:54:38.102 [INFO][4624] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="421b6723aed3cd2f0f62334a373da8e89499ded19c5999e2874189c1e0e9a773" iface="eth0" netns="" Nov 1 00:54:38.269816 env[1305]: 2025-11-01 00:54:38.102 [INFO][4624] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="421b6723aed3cd2f0f62334a373da8e89499ded19c5999e2874189c1e0e9a773" Nov 1 00:54:38.269816 env[1305]: 2025-11-01 00:54:38.102 [INFO][4624] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="421b6723aed3cd2f0f62334a373da8e89499ded19c5999e2874189c1e0e9a773" Nov 1 00:54:38.269816 env[1305]: 2025-11-01 00:54:38.185 [INFO][4633] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="421b6723aed3cd2f0f62334a373da8e89499ded19c5999e2874189c1e0e9a773" HandleID="k8s-pod-network.421b6723aed3cd2f0f62334a373da8e89499ded19c5999e2874189c1e0e9a773" Workload="ci--3510.3.8--n--0efaf8214b-k8s-whisker--56478f7ccd--qkwt8-eth0" Nov 1 00:54:38.269816 env[1305]: 2025-11-01 00:54:38.191 [INFO][4633] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:54:38.269816 env[1305]: 2025-11-01 00:54:38.191 [INFO][4633] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:54:38.269816 env[1305]: 2025-11-01 00:54:38.226 [WARNING][4633] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="421b6723aed3cd2f0f62334a373da8e89499ded19c5999e2874189c1e0e9a773" HandleID="k8s-pod-network.421b6723aed3cd2f0f62334a373da8e89499ded19c5999e2874189c1e0e9a773" Workload="ci--3510.3.8--n--0efaf8214b-k8s-whisker--56478f7ccd--qkwt8-eth0" Nov 1 00:54:38.269816 env[1305]: 2025-11-01 00:54:38.226 [INFO][4633] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="421b6723aed3cd2f0f62334a373da8e89499ded19c5999e2874189c1e0e9a773" HandleID="k8s-pod-network.421b6723aed3cd2f0f62334a373da8e89499ded19c5999e2874189c1e0e9a773" Workload="ci--3510.3.8--n--0efaf8214b-k8s-whisker--56478f7ccd--qkwt8-eth0" Nov 1 00:54:38.269816 env[1305]: 2025-11-01 00:54:38.252 [INFO][4633] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:54:38.269816 env[1305]: 2025-11-01 00:54:38.260 [INFO][4624] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="421b6723aed3cd2f0f62334a373da8e89499ded19c5999e2874189c1e0e9a773" Nov 1 00:54:38.269816 env[1305]: time="2025-11-01T00:54:38.269004826Z" level=info msg="TearDown network for sandbox \"421b6723aed3cd2f0f62334a373da8e89499ded19c5999e2874189c1e0e9a773\" successfully" Nov 1 00:54:38.269816 env[1305]: time="2025-11-01T00:54:38.269057834Z" level=info msg="StopPodSandbox for \"421b6723aed3cd2f0f62334a373da8e89499ded19c5999e2874189c1e0e9a773\" returns successfully" Nov 1 00:54:38.271015 env[1305]: time="2025-11-01T00:54:38.270974747Z" level=info msg="RemovePodSandbox for \"421b6723aed3cd2f0f62334a373da8e89499ded19c5999e2874189c1e0e9a773\"" Nov 1 00:54:38.271100 env[1305]: time="2025-11-01T00:54:38.271023879Z" level=info msg="Forcibly stopping sandbox \"421b6723aed3cd2f0f62334a373da8e89499ded19c5999e2874189c1e0e9a773\"" Nov 1 00:54:38.388585 env[1305]: time="2025-11-01T00:54:38.388460082Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:54:38.389868 env[1305]: time="2025-11-01T00:54:38.389804349Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:54:38.390884 kubelet[2095]: E1101 00:54:38.390253 2095 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:54:38.390884 kubelet[2095]: E1101 00:54:38.390308 2095 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:54:38.390884 kubelet[2095]: E1101 00:54:38.390456 2095 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qgc7f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-twt7m_calico-system(b08705e4-7a04-4c33-a8c8-a3f67298574d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:54:38.400175 kubelet[2095]: E1101 00:54:38.400132 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-twt7m" podUID="b08705e4-7a04-4c33-a8c8-a3f67298574d" Nov 1 00:54:38.469085 env[1305]: time="2025-11-01T00:54:38.468127907Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:54:38.483345 env[1305]: 2025-11-01 00:54:38.346 [WARNING][4651] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="421b6723aed3cd2f0f62334a373da8e89499ded19c5999e2874189c1e0e9a773" WorkloadEndpoint="ci--3510.3.8--n--0efaf8214b-k8s-whisker--56478f7ccd--qkwt8-eth0" Nov 1 00:54:38.483345 env[1305]: 2025-11-01 00:54:38.346 [INFO][4651] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="421b6723aed3cd2f0f62334a373da8e89499ded19c5999e2874189c1e0e9a773" Nov 1 00:54:38.483345 env[1305]: 2025-11-01 00:54:38.346 [INFO][4651] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="421b6723aed3cd2f0f62334a373da8e89499ded19c5999e2874189c1e0e9a773" iface="eth0" netns="" Nov 1 00:54:38.483345 env[1305]: 2025-11-01 00:54:38.346 [INFO][4651] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="421b6723aed3cd2f0f62334a373da8e89499ded19c5999e2874189c1e0e9a773" Nov 1 00:54:38.483345 env[1305]: 2025-11-01 00:54:38.346 [INFO][4651] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="421b6723aed3cd2f0f62334a373da8e89499ded19c5999e2874189c1e0e9a773" Nov 1 00:54:38.483345 env[1305]: 2025-11-01 00:54:38.443 [INFO][4663] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="421b6723aed3cd2f0f62334a373da8e89499ded19c5999e2874189c1e0e9a773" HandleID="k8s-pod-network.421b6723aed3cd2f0f62334a373da8e89499ded19c5999e2874189c1e0e9a773" Workload="ci--3510.3.8--n--0efaf8214b-k8s-whisker--56478f7ccd--qkwt8-eth0" Nov 1 00:54:38.483345 env[1305]: 2025-11-01 00:54:38.447 [INFO][4663] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:54:38.483345 env[1305]: 2025-11-01 00:54:38.447 [INFO][4663] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:54:38.483345 env[1305]: 2025-11-01 00:54:38.459 [WARNING][4663] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="421b6723aed3cd2f0f62334a373da8e89499ded19c5999e2874189c1e0e9a773" HandleID="k8s-pod-network.421b6723aed3cd2f0f62334a373da8e89499ded19c5999e2874189c1e0e9a773" Workload="ci--3510.3.8--n--0efaf8214b-k8s-whisker--56478f7ccd--qkwt8-eth0" Nov 1 00:54:38.483345 env[1305]: 2025-11-01 00:54:38.459 [INFO][4663] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="421b6723aed3cd2f0f62334a373da8e89499ded19c5999e2874189c1e0e9a773" HandleID="k8s-pod-network.421b6723aed3cd2f0f62334a373da8e89499ded19c5999e2874189c1e0e9a773" Workload="ci--3510.3.8--n--0efaf8214b-k8s-whisker--56478f7ccd--qkwt8-eth0" Nov 1 00:54:38.483345 env[1305]: 2025-11-01 00:54:38.462 [INFO][4663] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:54:38.483345 env[1305]: 2025-11-01 00:54:38.477 [INFO][4651] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="421b6723aed3cd2f0f62334a373da8e89499ded19c5999e2874189c1e0e9a773" Nov 1 00:54:38.483345 env[1305]: time="2025-11-01T00:54:38.482571580Z" level=info msg="TearDown network for sandbox \"421b6723aed3cd2f0f62334a373da8e89499ded19c5999e2874189c1e0e9a773\" successfully" Nov 1 00:54:38.485528 env[1305]: time="2025-11-01T00:54:38.485445536Z" level=info msg="RemovePodSandbox \"421b6723aed3cd2f0f62334a373da8e89499ded19c5999e2874189c1e0e9a773\" returns successfully" Nov 1 00:54:38.486801 env[1305]: time="2025-11-01T00:54:38.485967515Z" level=info msg="StopPodSandbox for \"418d6121b43496d539dd60c11a3eccbd66c28dbcab065ad62ba4cbb1ed4d68aa\"" Nov 1 00:54:38.760002 env[1305]: 2025-11-01 00:54:38.637 [WARNING][4680] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="418d6121b43496d539dd60c11a3eccbd66c28dbcab065ad62ba4cbb1ed4d68aa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0efaf8214b-k8s-coredns--668d6bf9bc--hbw54-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"91b31c91-0235-44c1-8490-69cf1d3604f2", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 53, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0efaf8214b", ContainerID:"9804c478642238c7b5993d8e52d82a916cfed30827c351f0768cd0121f09ee6b", Pod:"coredns-668d6bf9bc-hbw54", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.55.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibad39d8f5cf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:54:38.760002 env[1305]: 2025-11-01 
00:54:38.640 [INFO][4680] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="418d6121b43496d539dd60c11a3eccbd66c28dbcab065ad62ba4cbb1ed4d68aa" Nov 1 00:54:38.760002 env[1305]: 2025-11-01 00:54:38.640 [INFO][4680] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="418d6121b43496d539dd60c11a3eccbd66c28dbcab065ad62ba4cbb1ed4d68aa" iface="eth0" netns="" Nov 1 00:54:38.760002 env[1305]: 2025-11-01 00:54:38.641 [INFO][4680] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="418d6121b43496d539dd60c11a3eccbd66c28dbcab065ad62ba4cbb1ed4d68aa" Nov 1 00:54:38.760002 env[1305]: 2025-11-01 00:54:38.641 [INFO][4680] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="418d6121b43496d539dd60c11a3eccbd66c28dbcab065ad62ba4cbb1ed4d68aa" Nov 1 00:54:38.760002 env[1305]: 2025-11-01 00:54:38.738 [INFO][4687] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="418d6121b43496d539dd60c11a3eccbd66c28dbcab065ad62ba4cbb1ed4d68aa" HandleID="k8s-pod-network.418d6121b43496d539dd60c11a3eccbd66c28dbcab065ad62ba4cbb1ed4d68aa" Workload="ci--3510.3.8--n--0efaf8214b-k8s-coredns--668d6bf9bc--hbw54-eth0" Nov 1 00:54:38.760002 env[1305]: 2025-11-01 00:54:38.739 [INFO][4687] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:54:38.760002 env[1305]: 2025-11-01 00:54:38.739 [INFO][4687] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:54:38.760002 env[1305]: 2025-11-01 00:54:38.753 [WARNING][4687] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="418d6121b43496d539dd60c11a3eccbd66c28dbcab065ad62ba4cbb1ed4d68aa" HandleID="k8s-pod-network.418d6121b43496d539dd60c11a3eccbd66c28dbcab065ad62ba4cbb1ed4d68aa" Workload="ci--3510.3.8--n--0efaf8214b-k8s-coredns--668d6bf9bc--hbw54-eth0" Nov 1 00:54:38.760002 env[1305]: 2025-11-01 00:54:38.753 [INFO][4687] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="418d6121b43496d539dd60c11a3eccbd66c28dbcab065ad62ba4cbb1ed4d68aa" HandleID="k8s-pod-network.418d6121b43496d539dd60c11a3eccbd66c28dbcab065ad62ba4cbb1ed4d68aa" Workload="ci--3510.3.8--n--0efaf8214b-k8s-coredns--668d6bf9bc--hbw54-eth0" Nov 1 00:54:38.760002 env[1305]: 2025-11-01 00:54:38.755 [INFO][4687] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:54:38.760002 env[1305]: 2025-11-01 00:54:38.758 [INFO][4680] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="418d6121b43496d539dd60c11a3eccbd66c28dbcab065ad62ba4cbb1ed4d68aa" Nov 1 00:54:38.761417 env[1305]: time="2025-11-01T00:54:38.760042325Z" level=info msg="TearDown network for sandbox \"418d6121b43496d539dd60c11a3eccbd66c28dbcab065ad62ba4cbb1ed4d68aa\" successfully" Nov 1 00:54:38.761417 env[1305]: time="2025-11-01T00:54:38.760074270Z" level=info msg="StopPodSandbox for \"418d6121b43496d539dd60c11a3eccbd66c28dbcab065ad62ba4cbb1ed4d68aa\" returns successfully" Nov 1 00:54:38.761417 env[1305]: time="2025-11-01T00:54:38.760898370Z" level=info msg="RemovePodSandbox for \"418d6121b43496d539dd60c11a3eccbd66c28dbcab065ad62ba4cbb1ed4d68aa\"" Nov 1 00:54:38.761417 env[1305]: time="2025-11-01T00:54:38.760930198Z" level=info msg="Forcibly stopping sandbox \"418d6121b43496d539dd60c11a3eccbd66c28dbcab065ad62ba4cbb1ed4d68aa\"" Nov 1 00:54:38.773239 sshd[4629]: pam_unix(sshd:session): session closed for user core Nov 1 00:54:38.784804 kernel: audit: type=1106 audit(1761958478.775:465): pid=4629 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:54:38.775000 audit[4629]: USER_END pid=4629 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:54:38.785022 env[1305]: time="2025-11-01T00:54:38.780336344Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:54:38.786366 env[1305]: time="2025-11-01T00:54:38.785888216Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:54:38.787246 kubelet[2095]: E1101 00:54:38.786100 2095 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:54:38.787246 kubelet[2095]: E1101 00:54:38.786157 2095 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:54:38.787246 kubelet[2095]: E1101 00:54:38.786307 2095 kuberuntime_manager.go:1341] 
"Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-79wrz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5f668d4ccf-fmsxj_calico-apiserver(447c37d4-c1de-4035-a57b-b729047ea7fb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:54:38.788105 systemd[1]: sshd@10-144.126.212.254:22-139.178.89.65:50032.service: Deactivated successfully. Nov 1 00:54:38.788998 systemd[1]: session-9.scope: Deactivated successfully. 
Nov 1 00:54:38.791824 kubelet[2095]: E1101 00:54:38.789519 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f668d4ccf-fmsxj" podUID="447c37d4-c1de-4035-a57b-b729047ea7fb" Nov 1 00:54:38.790444 systemd-logind[1290]: Session 9 logged out. Waiting for processes to exit. Nov 1 00:54:38.791599 systemd-logind[1290]: Removed session 9. Nov 1 00:54:38.784000 audit[4629]: CRED_DISP pid=4629 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:54:38.799797 kernel: audit: type=1104 audit(1761958478.784:466): pid=4629 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:54:38.787000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-144.126.212.254:22-139.178.89.65:50032 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:54:38.906011 env[1305]: 2025-11-01 00:54:38.834 [WARNING][4702] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="418d6121b43496d539dd60c11a3eccbd66c28dbcab065ad62ba4cbb1ed4d68aa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0efaf8214b-k8s-coredns--668d6bf9bc--hbw54-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"91b31c91-0235-44c1-8490-69cf1d3604f2", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 53, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0efaf8214b", ContainerID:"9804c478642238c7b5993d8e52d82a916cfed30827c351f0768cd0121f09ee6b", Pod:"coredns-668d6bf9bc-hbw54", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.55.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibad39d8f5cf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:54:38.906011 env[1305]: 2025-11-01 
00:54:38.835 [INFO][4702] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="418d6121b43496d539dd60c11a3eccbd66c28dbcab065ad62ba4cbb1ed4d68aa" Nov 1 00:54:38.906011 env[1305]: 2025-11-01 00:54:38.835 [INFO][4702] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="418d6121b43496d539dd60c11a3eccbd66c28dbcab065ad62ba4cbb1ed4d68aa" iface="eth0" netns="" Nov 1 00:54:38.906011 env[1305]: 2025-11-01 00:54:38.835 [INFO][4702] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="418d6121b43496d539dd60c11a3eccbd66c28dbcab065ad62ba4cbb1ed4d68aa" Nov 1 00:54:38.906011 env[1305]: 2025-11-01 00:54:38.835 [INFO][4702] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="418d6121b43496d539dd60c11a3eccbd66c28dbcab065ad62ba4cbb1ed4d68aa" Nov 1 00:54:38.906011 env[1305]: 2025-11-01 00:54:38.892 [INFO][4711] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="418d6121b43496d539dd60c11a3eccbd66c28dbcab065ad62ba4cbb1ed4d68aa" HandleID="k8s-pod-network.418d6121b43496d539dd60c11a3eccbd66c28dbcab065ad62ba4cbb1ed4d68aa" Workload="ci--3510.3.8--n--0efaf8214b-k8s-coredns--668d6bf9bc--hbw54-eth0" Nov 1 00:54:38.906011 env[1305]: 2025-11-01 00:54:38.892 [INFO][4711] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:54:38.906011 env[1305]: 2025-11-01 00:54:38.892 [INFO][4711] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:54:38.906011 env[1305]: 2025-11-01 00:54:38.900 [WARNING][4711] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="418d6121b43496d539dd60c11a3eccbd66c28dbcab065ad62ba4cbb1ed4d68aa" HandleID="k8s-pod-network.418d6121b43496d539dd60c11a3eccbd66c28dbcab065ad62ba4cbb1ed4d68aa" Workload="ci--3510.3.8--n--0efaf8214b-k8s-coredns--668d6bf9bc--hbw54-eth0" Nov 1 00:54:38.906011 env[1305]: 2025-11-01 00:54:38.900 [INFO][4711] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="418d6121b43496d539dd60c11a3eccbd66c28dbcab065ad62ba4cbb1ed4d68aa" HandleID="k8s-pod-network.418d6121b43496d539dd60c11a3eccbd66c28dbcab065ad62ba4cbb1ed4d68aa" Workload="ci--3510.3.8--n--0efaf8214b-k8s-coredns--668d6bf9bc--hbw54-eth0" Nov 1 00:54:38.906011 env[1305]: 2025-11-01 00:54:38.902 [INFO][4711] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:54:38.906011 env[1305]: 2025-11-01 00:54:38.904 [INFO][4702] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="418d6121b43496d539dd60c11a3eccbd66c28dbcab065ad62ba4cbb1ed4d68aa" Nov 1 00:54:38.906953 env[1305]: time="2025-11-01T00:54:38.906029142Z" level=info msg="TearDown network for sandbox \"418d6121b43496d539dd60c11a3eccbd66c28dbcab065ad62ba4cbb1ed4d68aa\" successfully" Nov 1 00:54:38.908707 env[1305]: time="2025-11-01T00:54:38.908674213Z" level=info msg="RemovePodSandbox \"418d6121b43496d539dd60c11a3eccbd66c28dbcab065ad62ba4cbb1ed4d68aa\" returns successfully" Nov 1 00:54:38.909223 env[1305]: time="2025-11-01T00:54:38.909196339Z" level=info msg="StopPodSandbox for \"2a13d0cdc8990d99ba81d01c0c9b313b654101f1dd097a248bb33976f2d75804\"" Nov 1 00:54:39.009015 env[1305]: 2025-11-01 00:54:38.960 [WARNING][4726] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2a13d0cdc8990d99ba81d01c0c9b313b654101f1dd097a248bb33976f2d75804" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0efaf8214b-k8s-calico--apiserver--5f668d4ccf--fmsxj-eth0", GenerateName:"calico-apiserver-5f668d4ccf-", Namespace:"calico-apiserver", SelfLink:"", UID:"447c37d4-c1de-4035-a57b-b729047ea7fb", ResourceVersion:"1069", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 53, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f668d4ccf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0efaf8214b", ContainerID:"e1cb9232d922a98decccb53b78940aa2564ffbe25eeddd711a1a7c5146e18bea", Pod:"calico-apiserver-5f668d4ccf-fmsxj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.55.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic34e69b9f78", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:54:39.009015 env[1305]: 2025-11-01 00:54:38.960 [INFO][4726] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2a13d0cdc8990d99ba81d01c0c9b313b654101f1dd097a248bb33976f2d75804" Nov 1 00:54:39.009015 env[1305]: 2025-11-01 00:54:38.960 [INFO][4726] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="2a13d0cdc8990d99ba81d01c0c9b313b654101f1dd097a248bb33976f2d75804" iface="eth0" netns="" Nov 1 00:54:39.009015 env[1305]: 2025-11-01 00:54:38.960 [INFO][4726] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2a13d0cdc8990d99ba81d01c0c9b313b654101f1dd097a248bb33976f2d75804" Nov 1 00:54:39.009015 env[1305]: 2025-11-01 00:54:38.960 [INFO][4726] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2a13d0cdc8990d99ba81d01c0c9b313b654101f1dd097a248bb33976f2d75804" Nov 1 00:54:39.009015 env[1305]: 2025-11-01 00:54:38.993 [INFO][4733] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2a13d0cdc8990d99ba81d01c0c9b313b654101f1dd097a248bb33976f2d75804" HandleID="k8s-pod-network.2a13d0cdc8990d99ba81d01c0c9b313b654101f1dd097a248bb33976f2d75804" Workload="ci--3510.3.8--n--0efaf8214b-k8s-calico--apiserver--5f668d4ccf--fmsxj-eth0" Nov 1 00:54:39.009015 env[1305]: 2025-11-01 00:54:38.994 [INFO][4733] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:54:39.009015 env[1305]: 2025-11-01 00:54:38.994 [INFO][4733] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:54:39.009015 env[1305]: 2025-11-01 00:54:39.003 [WARNING][4733] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2a13d0cdc8990d99ba81d01c0c9b313b654101f1dd097a248bb33976f2d75804" HandleID="k8s-pod-network.2a13d0cdc8990d99ba81d01c0c9b313b654101f1dd097a248bb33976f2d75804" Workload="ci--3510.3.8--n--0efaf8214b-k8s-calico--apiserver--5f668d4ccf--fmsxj-eth0" Nov 1 00:54:39.009015 env[1305]: 2025-11-01 00:54:39.003 [INFO][4733] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2a13d0cdc8990d99ba81d01c0c9b313b654101f1dd097a248bb33976f2d75804" HandleID="k8s-pod-network.2a13d0cdc8990d99ba81d01c0c9b313b654101f1dd097a248bb33976f2d75804" Workload="ci--3510.3.8--n--0efaf8214b-k8s-calico--apiserver--5f668d4ccf--fmsxj-eth0" Nov 1 00:54:39.009015 env[1305]: 2025-11-01 00:54:39.005 [INFO][4733] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:54:39.009015 env[1305]: 2025-11-01 00:54:39.007 [INFO][4726] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2a13d0cdc8990d99ba81d01c0c9b313b654101f1dd097a248bb33976f2d75804" Nov 1 00:54:39.009584 env[1305]: time="2025-11-01T00:54:39.009052814Z" level=info msg="TearDown network for sandbox \"2a13d0cdc8990d99ba81d01c0c9b313b654101f1dd097a248bb33976f2d75804\" successfully" Nov 1 00:54:39.009584 env[1305]: time="2025-11-01T00:54:39.009090232Z" level=info msg="StopPodSandbox for \"2a13d0cdc8990d99ba81d01c0c9b313b654101f1dd097a248bb33976f2d75804\" returns successfully" Nov 1 00:54:39.009644 env[1305]: time="2025-11-01T00:54:39.009612057Z" level=info msg="RemovePodSandbox for \"2a13d0cdc8990d99ba81d01c0c9b313b654101f1dd097a248bb33976f2d75804\"" Nov 1 00:54:39.009677 env[1305]: time="2025-11-01T00:54:39.009643809Z" level=info msg="Forcibly stopping sandbox \"2a13d0cdc8990d99ba81d01c0c9b313b654101f1dd097a248bb33976f2d75804\"" Nov 1 00:54:39.117362 env[1305]: 2025-11-01 00:54:39.060 [WARNING][4750] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2a13d0cdc8990d99ba81d01c0c9b313b654101f1dd097a248bb33976f2d75804" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--0efaf8214b-k8s-calico--apiserver--5f668d4ccf--fmsxj-eth0", GenerateName:"calico-apiserver-5f668d4ccf-", Namespace:"calico-apiserver", SelfLink:"", UID:"447c37d4-c1de-4035-a57b-b729047ea7fb", ResourceVersion:"1069", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 53, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f668d4ccf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-0efaf8214b", ContainerID:"e1cb9232d922a98decccb53b78940aa2564ffbe25eeddd711a1a7c5146e18bea", Pod:"calico-apiserver-5f668d4ccf-fmsxj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.55.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic34e69b9f78", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:54:39.117362 env[1305]: 2025-11-01 00:54:39.061 [INFO][4750] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2a13d0cdc8990d99ba81d01c0c9b313b654101f1dd097a248bb33976f2d75804" Nov 1 00:54:39.117362 env[1305]: 2025-11-01 00:54:39.061 [INFO][4750] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="2a13d0cdc8990d99ba81d01c0c9b313b654101f1dd097a248bb33976f2d75804" iface="eth0" netns="" Nov 1 00:54:39.117362 env[1305]: 2025-11-01 00:54:39.061 [INFO][4750] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2a13d0cdc8990d99ba81d01c0c9b313b654101f1dd097a248bb33976f2d75804" Nov 1 00:54:39.117362 env[1305]: 2025-11-01 00:54:39.061 [INFO][4750] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2a13d0cdc8990d99ba81d01c0c9b313b654101f1dd097a248bb33976f2d75804" Nov 1 00:54:39.117362 env[1305]: 2025-11-01 00:54:39.097 [INFO][4757] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2a13d0cdc8990d99ba81d01c0c9b313b654101f1dd097a248bb33976f2d75804" HandleID="k8s-pod-network.2a13d0cdc8990d99ba81d01c0c9b313b654101f1dd097a248bb33976f2d75804" Workload="ci--3510.3.8--n--0efaf8214b-k8s-calico--apiserver--5f668d4ccf--fmsxj-eth0" Nov 1 00:54:39.117362 env[1305]: 2025-11-01 00:54:39.097 [INFO][4757] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:54:39.117362 env[1305]: 2025-11-01 00:54:39.097 [INFO][4757] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:54:39.117362 env[1305]: 2025-11-01 00:54:39.108 [WARNING][4757] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2a13d0cdc8990d99ba81d01c0c9b313b654101f1dd097a248bb33976f2d75804" HandleID="k8s-pod-network.2a13d0cdc8990d99ba81d01c0c9b313b654101f1dd097a248bb33976f2d75804" Workload="ci--3510.3.8--n--0efaf8214b-k8s-calico--apiserver--5f668d4ccf--fmsxj-eth0" Nov 1 00:54:39.117362 env[1305]: 2025-11-01 00:54:39.108 [INFO][4757] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2a13d0cdc8990d99ba81d01c0c9b313b654101f1dd097a248bb33976f2d75804" HandleID="k8s-pod-network.2a13d0cdc8990d99ba81d01c0c9b313b654101f1dd097a248bb33976f2d75804" Workload="ci--3510.3.8--n--0efaf8214b-k8s-calico--apiserver--5f668d4ccf--fmsxj-eth0" Nov 1 00:54:39.117362 env[1305]: 2025-11-01 00:54:39.112 [INFO][4757] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:54:39.117362 env[1305]: 2025-11-01 00:54:39.115 [INFO][4750] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2a13d0cdc8990d99ba81d01c0c9b313b654101f1dd097a248bb33976f2d75804" Nov 1 00:54:39.118076 env[1305]: time="2025-11-01T00:54:39.118026126Z" level=info msg="TearDown network for sandbox \"2a13d0cdc8990d99ba81d01c0c9b313b654101f1dd097a248bb33976f2d75804\" successfully" Nov 1 00:54:39.121920 env[1305]: time="2025-11-01T00:54:39.121883285Z" level=info msg="RemovePodSandbox \"2a13d0cdc8990d99ba81d01c0c9b313b654101f1dd097a248bb33976f2d75804\" returns successfully" Nov 1 00:54:43.777364 systemd[1]: Started sshd@11-144.126.212.254:22-139.178.89.65:50040.service. Nov 1 00:54:43.782777 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 00:54:43.782893 kernel: audit: type=1130 audit(1761958483.776:468): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-144.126.212.254:22-139.178.89.65:50040 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:54:43.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-144.126.212.254:22-139.178.89.65:50040 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:54:43.838000 audit[4773]: USER_ACCT pid=4773 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:54:43.844646 sshd[4773]: Accepted publickey for core from 139.178.89.65 port 50040 ssh2: RSA SHA256:bQOwnZoRZNmgRHdcvbYhT2IlOX5E1Dxtpq66cFKwaFs Nov 1 00:54:43.846888 kernel: audit: type=1101 audit(1761958483.838:469): pid=4773 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:54:43.846000 audit[4773]: CRED_ACQ pid=4773 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:54:43.847388 sshd[4773]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:54:43.859568 kernel: audit: type=1103 audit(1761958483.846:470): pid=4773 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:54:43.859694 kernel: audit: type=1006 audit(1761958483.846:471): pid=4773 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Nov 1 00:54:43.846000 
audit[4773]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc8aa27890 a2=3 a3=0 items=0 ppid=1 pid=4773 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:43.869776 kernel: audit: type=1300 audit(1761958483.846:471): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc8aa27890 a2=3 a3=0 items=0 ppid=1 pid=4773 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:43.846000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:54:43.875211 kernel: audit: type=1327 audit(1761958483.846:471): proctitle=737368643A20636F7265205B707269765D Nov 1 00:54:43.875969 systemd-logind[1290]: New session 10 of user core. Nov 1 00:54:43.876892 systemd[1]: Started session-10.scope. Nov 1 00:54:43.883000 audit[4773]: USER_START pid=4773 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:54:43.893785 kernel: audit: type=1105 audit(1761958483.883:472): pid=4773 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:54:43.895000 audit[4776]: CRED_ACQ pid=4776 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:54:43.905775 kernel: audit: type=1103 
audit(1761958483.895:473): pid=4776 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:54:44.036195 sshd[4773]: pam_unix(sshd:session): session closed for user core Nov 1 00:54:44.036000 audit[4773]: USER_END pid=4773 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:54:44.040947 systemd[1]: Started sshd@12-144.126.212.254:22-139.178.89.65:50044.service. Nov 1 00:54:44.046780 kernel: audit: type=1106 audit(1761958484.036:474): pid=4773 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:54:44.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-144.126.212.254:22-139.178.89.65:50044 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:54:44.049954 systemd[1]: sshd@11-144.126.212.254:22-139.178.89.65:50040.service: Deactivated successfully. Nov 1 00:54:44.050842 systemd[1]: session-10.scope: Deactivated successfully. Nov 1 00:54:44.056775 kernel: audit: type=1130 audit(1761958484.047:475): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-144.126.212.254:22-139.178.89.65:50044 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:54:44.059433 systemd-logind[1290]: Session 10 logged out. 
Waiting for processes to exit. Nov 1 00:54:44.061280 systemd-logind[1290]: Removed session 10. Nov 1 00:54:44.047000 audit[4773]: CRED_DISP pid=4773 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:54:44.048000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-144.126.212.254:22-139.178.89.65:50040 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:54:44.097000 audit[4784]: USER_ACCT pid=4784 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:54:44.098354 sshd[4784]: Accepted publickey for core from 139.178.89.65 port 50044 ssh2: RSA SHA256:bQOwnZoRZNmgRHdcvbYhT2IlOX5E1Dxtpq66cFKwaFs Nov 1 00:54:44.098000 audit[4784]: CRED_ACQ pid=4784 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:54:44.098000 audit[4784]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcf7e60820 a2=3 a3=0 items=0 ppid=1 pid=4784 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:44.098000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:54:44.099797 sshd[4784]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:54:44.105789 systemd[1]: Started session-11.scope. 
Nov 1 00:54:44.106156 systemd-logind[1290]: New session 11 of user core. Nov 1 00:54:44.115000 audit[4784]: USER_START pid=4784 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:54:44.118000 audit[4789]: CRED_ACQ pid=4789 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:54:44.460528 sshd[4784]: pam_unix(sshd:session): session closed for user core Nov 1 00:54:44.461000 audit[4784]: USER_END pid=4784 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:54:44.461000 audit[4784]: CRED_DISP pid=4784 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:54:44.465301 systemd[1]: Started sshd@13-144.126.212.254:22-139.178.89.65:50058.service. Nov 1 00:54:44.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-144.126.212.254:22-139.178.89.65:50058 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:54:44.469332 kubelet[2095]: E1101 00:54:44.469175 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6dfb57dc84-knf65" podUID="0234b74a-300a-4772-b752-16560b6b9a9c" Nov 1 00:54:44.471040 systemd[1]: sshd@12-144.126.212.254:22-139.178.89.65:50044.service: Deactivated successfully. Nov 1 00:54:44.470000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-144.126.212.254:22-139.178.89.65:50044 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:54:44.475891 systemd[1]: session-11.scope: Deactivated successfully. Nov 1 00:54:44.481956 systemd-logind[1290]: Session 11 logged out. Waiting for processes to exit. Nov 1 00:54:44.485068 systemd-logind[1290]: Removed session 11. 
Nov 1 00:54:44.544000 audit[4795]: USER_ACCT pid=4795 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:54:44.545541 sshd[4795]: Accepted publickey for core from 139.178.89.65 port 50058 ssh2: RSA SHA256:bQOwnZoRZNmgRHdcvbYhT2IlOX5E1Dxtpq66cFKwaFs Nov 1 00:54:44.546000 audit[4795]: CRED_ACQ pid=4795 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:54:44.546000 audit[4795]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff73bd28f0 a2=3 a3=0 items=0 ppid=1 pid=4795 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:44.546000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:54:44.547388 sshd[4795]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:54:44.553425 systemd-logind[1290]: New session 12 of user core. Nov 1 00:54:44.554268 systemd[1]: Started session-12.scope. 
Nov 1 00:54:44.566000 audit[4795]: USER_START pid=4795 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:54:44.568000 audit[4800]: CRED_ACQ pid=4800 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:54:44.740685 sshd[4795]: pam_unix(sshd:session): session closed for user core Nov 1 00:54:44.743000 audit[4795]: USER_END pid=4795 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:54:44.743000 audit[4795]: CRED_DISP pid=4795 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:54:44.745000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-144.126.212.254:22-139.178.89.65:50058 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:54:44.745904 systemd[1]: sshd@13-144.126.212.254:22-139.178.89.65:50058.service: Deactivated successfully. Nov 1 00:54:44.747354 systemd[1]: session-12.scope: Deactivated successfully. Nov 1 00:54:44.747884 systemd-logind[1290]: Session 12 logged out. Waiting for processes to exit. Nov 1 00:54:44.748915 systemd-logind[1290]: Removed session 12. 
Nov 1 00:54:47.467084 kubelet[2095]: E1101 00:54:47.467038 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-j9dnh" podUID="acf47117-3eb1-4aa3-89a4-bc9fecdad703" Nov 1 00:54:48.471160 kubelet[2095]: E1101 00:54:48.471102 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85b568d67d-z4c8c" podUID="0979e255-e4e9-4664-a95e-5354a9f7d531" Nov 1 00:54:49.756298 kernel: kauditd_printk_skb: 23 callbacks suppressed Nov 1 00:54:49.756453 kernel: audit: type=1130 audit(1761958489.746:495): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-144.126.212.254:22-139.178.89.65:48126 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:54:49.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-144.126.212.254:22-139.178.89.65:48126 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:54:49.747156 systemd[1]: Started sshd@14-144.126.212.254:22-139.178.89.65:48126.service. Nov 1 00:54:49.809000 audit[4810]: USER_ACCT pid=4810 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:54:49.812293 sshd[4810]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:54:49.818649 sshd[4810]: Accepted publickey for core from 139.178.89.65 port 48126 ssh2: RSA SHA256:bQOwnZoRZNmgRHdcvbYhT2IlOX5E1Dxtpq66cFKwaFs Nov 1 00:54:49.818824 kernel: audit: type=1101 audit(1761958489.809:496): pid=4810 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:54:49.811000 audit[4810]: CRED_ACQ pid=4810 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:54:49.826839 kernel: audit: type=1103 audit(1761958489.811:497): pid=4810 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:54:49.830317 systemd-logind[1290]: New session 13 of user core. Nov 1 00:54:49.831409 systemd[1]: Started session-13.scope. 
Nov 1 00:54:49.857783 kernel: audit: type=1006 audit(1761958489.811:498): pid=4810 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Nov 1 00:54:49.811000 audit[4810]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff134b4b60 a2=3 a3=0 items=0 ppid=1 pid=4810 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:49.866786 kernel: audit: type=1300 audit(1761958489.811:498): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff134b4b60 a2=3 a3=0 items=0 ppid=1 pid=4810 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:54:49.811000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:54:49.880273 kernel: audit: type=1327 audit(1761958489.811:498): proctitle=737368643A20636F7265205B707269765D Nov 1 00:54:49.880371 kernel: audit: type=1105 audit(1761958489.836:499): pid=4810 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:54:49.836000 audit[4810]: USER_START pid=4810 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:54:49.838000 audit[4813]: CRED_ACQ pid=4813 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 
terminal=ssh res=success' Nov 1 00:54:49.890033 kernel: audit: type=1103 audit(1761958489.838:500): pid=4813 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:54:50.012154 sshd[4810]: pam_unix(sshd:session): session closed for user core Nov 1 00:54:50.014000 audit[4810]: USER_END pid=4810 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:54:50.016566 systemd[1]: sshd@14-144.126.212.254:22-139.178.89.65:48126.service: Deactivated successfully. Nov 1 00:54:50.023821 kernel: audit: type=1106 audit(1761958490.014:501): pid=4810 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:54:50.017465 systemd[1]: session-13.scope: Deactivated successfully. Nov 1 00:54:50.025082 systemd-logind[1290]: Session 13 logged out. Waiting for processes to exit. Nov 1 00:54:50.026214 systemd-logind[1290]: Removed session 13. 
Nov 1 00:54:50.014000 audit[4810]: CRED_DISP pid=4810 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Nov 1 00:54:50.035787 kernel: audit: type=1104 audit(1761958490.014:502): pid=4810 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Nov 1 00:54:50.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-144.126.212.254:22-139.178.89.65:48126 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:54:50.470352 kubelet[2095]: E1101 00:54:50.470306 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f668d4ccf-fmsxj" podUID="447c37d4-c1de-4035-a57b-b729047ea7fb"
Nov 1 00:54:50.470877 kubelet[2095]: E1101 00:54:50.470499 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f668d4ccf-gzvhz" podUID="0aeb6ff7-2d7d-423c-8068-1607bda1ebe8"
Nov 1 00:54:50.472108 kubelet[2095]: E1101 00:54:50.472027 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-twt7m" podUID="b08705e4-7a04-4c33-a8c8-a3f67298574d"
Nov 1 00:54:53.467764 kubelet[2095]: E1101 00:54:53.467703 2095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 1 00:54:53.468645 kubelet[2095]: E1101 00:54:53.468611 2095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 1 00:54:55.017627 systemd[1]: Started sshd@15-144.126.212.254:22-139.178.89.65:48134.service.
Nov 1 00:54:55.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-144.126.212.254:22-139.178.89.65:48134 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:54:55.020372 kernel: kauditd_printk_skb: 1 callbacks suppressed
Nov 1 00:54:55.020441 kernel: audit: type=1130 audit(1761958495.016:504): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-144.126.212.254:22-139.178.89.65:48134 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:54:55.079000 audit[4828]: USER_ACCT pid=4828 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Nov 1 00:54:55.082562 sshd[4828]: Accepted publickey for core from 139.178.89.65 port 48134 ssh2: RSA SHA256:bQOwnZoRZNmgRHdcvbYhT2IlOX5E1Dxtpq66cFKwaFs
Nov 1 00:54:55.089650 sshd[4828]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:54:55.090130 kernel: audit: type=1101 audit(1761958495.079:505): pid=4828 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Nov 1 00:54:55.087000 audit[4828]: CRED_ACQ pid=4828 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Nov 1 00:54:55.098774 kernel: audit: type=1103 audit(1761958495.087:506): pid=4828 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Nov 1 00:54:55.087000 audit[4828]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe024a3340 a2=3 a3=0 items=0 ppid=1 pid=4828 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:54:55.107457 systemd[1]: Started session-14.scope.
Nov 1 00:54:55.108705 systemd-logind[1290]: New session 14 of user core.
Nov 1 00:54:55.111991 kernel: audit: type=1006 audit(1761958495.087:507): pid=4828 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1
Nov 1 00:54:55.112118 kernel: audit: type=1300 audit(1761958495.087:507): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe024a3340 a2=3 a3=0 items=0 ppid=1 pid=4828 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:54:55.087000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Nov 1 00:54:55.113000 audit[4828]: USER_START pid=4828 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Nov 1 00:54:55.134744 kernel: audit: type=1327 audit(1761958495.087:507): proctitle=737368643A20636F7265205B707269765D
Nov 1 00:54:55.134832 kernel: audit: type=1105 audit(1761958495.113:508): pid=4828 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Nov 1 00:54:55.114000 audit[4831]: CRED_ACQ pid=4831 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Nov 1 00:54:55.141816 kernel: audit: type=1103 audit(1761958495.114:509): pid=4831 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Nov 1 00:54:55.237680 sshd[4828]: pam_unix(sshd:session): session closed for user core
Nov 1 00:54:55.237000 audit[4828]: USER_END pid=4828 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Nov 1 00:54:55.239000 audit[4828]: CRED_DISP pid=4828 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Nov 1 00:54:55.254948 kernel: audit: type=1106 audit(1761958495.237:510): pid=4828 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Nov 1 00:54:55.255054 kernel: audit: type=1104 audit(1761958495.239:511): pid=4828 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Nov 1 00:54:55.255448 systemd[1]: sshd@15-144.126.212.254:22-139.178.89.65:48134.service: Deactivated successfully.
Nov 1 00:54:55.256315 systemd[1]: session-14.scope: Deactivated successfully.
Nov 1 00:54:55.257739 systemd-logind[1290]: Session 14 logged out. Waiting for processes to exit.
Nov 1 00:54:55.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-144.126.212.254:22-139.178.89.65:48134 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:54:55.258890 systemd-logind[1290]: Removed session 14.
Nov 1 00:54:56.467629 env[1305]: time="2025-11-01T00:54:56.467587654Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Nov 1 00:54:56.786266 env[1305]: time="2025-11-01T00:54:56.786113756Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 1 00:54:56.826479 env[1305]: time="2025-11-01T00:54:56.826392126Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Nov 1 00:54:56.826696 kubelet[2095]: E1101 00:54:56.826657 2095 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Nov 1 00:54:56.827070 kubelet[2095]: E1101 00:54:56.826711 2095 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Nov 1 00:54:56.827070 kubelet[2095]: E1101 00:54:56.826884 2095 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:124ea995052c4baba627fef25423b142,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z5bfl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6dfb57dc84-knf65_calico-system(0234b74a-300a-4772-b752-16560b6b9a9c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Nov 1 00:54:56.829462 env[1305]: time="2025-11-01T00:54:56.829426847Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Nov 1 00:54:57.127437 env[1305]: time="2025-11-01T00:54:57.127089578Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 1 00:54:57.128524 env[1305]: time="2025-11-01T00:54:57.128364353Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Nov 1 00:54:57.128847 kubelet[2095]: E1101 00:54:57.128799 2095 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Nov 1 00:54:57.128950 kubelet[2095]: E1101 00:54:57.128875 2095 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Nov 1 00:54:57.129093 kubelet[2095]: E1101 00:54:57.129032 2095 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z5bfl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6dfb57dc84-knf65_calico-system(0234b74a-300a-4772-b752-16560b6b9a9c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Nov 1 00:54:57.130419 kubelet[2095]: E1101 00:54:57.130381 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6dfb57dc84-knf65" podUID="0234b74a-300a-4772-b752-16560b6b9a9c"
Nov 1 00:54:58.468538 env[1305]: time="2025-11-01T00:54:58.468499774Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Nov 1 00:54:58.768337 env[1305]: time="2025-11-01T00:54:58.768175097Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 1 00:54:58.769099 env[1305]: time="2025-11-01T00:54:58.769042948Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Nov 1 00:54:58.769498 kubelet[2095]: E1101 00:54:58.769448 2095 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 1 00:54:58.770094 kubelet[2095]: E1101 00:54:58.770053 2095 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 1 00:54:58.770504 kubelet[2095]: E1101 00:54:58.770422 2095 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vvfdv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-j9dnh_calico-system(acf47117-3eb1-4aa3-89a4-bc9fecdad703): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Nov 1 00:54:58.771963 kubelet[2095]: E1101 00:54:58.771918 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-j9dnh" podUID="acf47117-3eb1-4aa3-89a4-bc9fecdad703"
Nov 1 00:55:00.247568 kernel: kauditd_printk_skb: 1 callbacks suppressed
Nov 1 00:55:00.247832 kernel: audit: type=1130 audit(1761958500.243:513): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-144.126.212.254:22-139.178.89.65:52706 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:55:00.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-144.126.212.254:22-139.178.89.65:52706 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:55:00.244950 systemd[1]: Started sshd@16-144.126.212.254:22-139.178.89.65:52706.service.
Nov 1 00:55:00.311054 sshd[4866]: Accepted publickey for core from 139.178.89.65 port 52706 ssh2: RSA SHA256:bQOwnZoRZNmgRHdcvbYhT2IlOX5E1Dxtpq66cFKwaFs
Nov 1 00:55:00.309000 audit[4866]: USER_ACCT pid=4866 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Nov 1 00:55:00.320789 kernel: audit: type=1101 audit(1761958500.309:514): pid=4866 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Nov 1 00:55:00.321195 sshd[4866]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:55:00.310000 audit[4866]: CRED_ACQ pid=4866 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Nov 1 00:55:00.332785 kernel: audit: type=1103 audit(1761958500.310:515): pid=4866 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Nov 1 00:55:00.334048 systemd-logind[1290]: New session 15 of user core.
Nov 1 00:55:00.335466 systemd[1]: Started session-15.scope.
Nov 1 00:55:00.349030 kernel: audit: type=1006 audit(1761958500.310:516): pid=4866 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1
Nov 1 00:55:00.310000 audit[4866]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffa611f170 a2=3 a3=0 items=0 ppid=1 pid=4866 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:55:00.310000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Nov 1 00:55:00.373450 kernel: audit: type=1300 audit(1761958500.310:516): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffa611f170 a2=3 a3=0 items=0 ppid=1 pid=4866 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:55:00.374025 kernel: audit: type=1327 audit(1761958500.310:516): proctitle=737368643A20636F7265205B707269765D
Nov 1 00:55:00.348000 audit[4866]: USER_START pid=4866 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Nov 1 00:55:00.390784 kernel: audit: type=1105 audit(1761958500.348:517): pid=4866 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Nov 1 00:55:00.350000 audit[4869]: CRED_ACQ pid=4869 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Nov 1 00:55:00.405796 kernel: audit: type=1103 audit(1761958500.350:518): pid=4869 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Nov 1 00:55:00.575022 sshd[4866]: pam_unix(sshd:session): session closed for user core
Nov 1 00:55:00.575000 audit[4866]: USER_END pid=4866 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Nov 1 00:55:00.586791 kernel: audit: type=1106 audit(1761958500.575:519): pid=4866 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Nov 1 00:55:00.584000 audit[4866]: CRED_DISP pid=4866 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Nov 1 00:55:00.588656 systemd[1]: sshd@16-144.126.212.254:22-139.178.89.65:52706.service: Deactivated successfully.
Nov 1 00:55:00.589580 systemd[1]: session-15.scope: Deactivated successfully.
Nov 1 00:55:00.595204 systemd-logind[1290]: Session 15 logged out. Waiting for processes to exit.
Nov 1 00:55:00.595809 kernel: audit: type=1104 audit(1761958500.584:520): pid=4866 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Nov 1 00:55:00.587000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-144.126.212.254:22-139.178.89.65:52706 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:55:00.596375 systemd-logind[1290]: Removed session 15.
Nov 1 00:55:01.467265 env[1305]: time="2025-11-01T00:55:01.466915267Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 1 00:55:01.790529 env[1305]: time="2025-11-01T00:55:01.790249470Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 1 00:55:01.791545 env[1305]: time="2025-11-01T00:55:01.791416097Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 1 00:55:01.791893 kubelet[2095]: E1101 00:55:01.791831 2095 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 1 00:55:01.792307 kubelet[2095]: E1101 00:55:01.792280 2095 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 1 00:55:01.792611 kubelet[2095]: E1101 00:55:01.792571 2095 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-79wrz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5f668d4ccf-fmsxj_calico-apiserver(447c37d4-c1de-4035-a57b-b729047ea7fb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 1 00:55:01.794501 kubelet[2095]: E1101 00:55:01.794429 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f668d4ccf-fmsxj" podUID="447c37d4-c1de-4035-a57b-b729047ea7fb"
Nov 1 00:55:02.466173 kubelet[2095]: E1101 00:55:02.466135 2095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 1 00:55:02.468413 env[1305]: time="2025-11-01T00:55:02.468155755Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 1 00:55:02.771742 env[1305]: time="2025-11-01T00:55:02.771474446Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 1 00:55:02.772784 env[1305]: time="2025-11-01T00:55:02.772632287Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 1 00:55:02.773117 kubelet[2095]: E1101 00:55:02.773068 2095 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 1 00:55:02.773203 kubelet[2095]: E1101 00:55:02.773132 2095 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 1 00:55:02.773718 kubelet[2095]: E1101 00:55:02.773598 2095 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gj6f7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5f668d4ccf-gzvhz_calico-apiserver(0aeb6ff7-2d7d-423c-8068-1607bda1ebe8): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:55:02.775196 kubelet[2095]: E1101 00:55:02.775137 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f668d4ccf-gzvhz" podUID="0aeb6ff7-2d7d-423c-8068-1607bda1ebe8" Nov 1 00:55:03.466249 kubelet[2095]: E1101 00:55:03.466203 2095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:55:03.468046 env[1305]: time="2025-11-01T00:55:03.468007671Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:55:03.769246 env[1305]: time="2025-11-01T00:55:03.768981968Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:55:03.770352 env[1305]: time="2025-11-01T00:55:03.770222284Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:55:03.770702 kubelet[2095]: E1101 00:55:03.770668 2095 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:55:03.770889 kubelet[2095]: E1101 00:55:03.770865 2095 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:55:03.771385 kubelet[2095]: E1101 00:55:03.771225 2095 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bmmkj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,Recursiv
eReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-85b568d67d-z4c8c_calico-system(0979e255-e4e9-4664-a95e-5354a9f7d531): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:55:03.772707 kubelet[2095]: E1101 00:55:03.772668 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85b568d67d-z4c8c" podUID="0979e255-e4e9-4664-a95e-5354a9f7d531" Nov 1 00:55:05.467330 env[1305]: time="2025-11-01T00:55:05.467074225Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:55:05.581280 systemd[1]: Started sshd@17-144.126.212.254:22-139.178.89.65:52712.service. Nov 1 00:55:05.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-144.126.212.254:22-139.178.89.65:52712 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:55:05.583735 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 00:55:05.583839 kernel: audit: type=1130 audit(1761958505.581:522): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-144.126.212.254:22-139.178.89.65:52712 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:55:05.683000 audit[4882]: USER_ACCT pid=4882 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:05.685319 sshd[4882]: Accepted publickey for core from 139.178.89.65 port 52712 ssh2: RSA SHA256:bQOwnZoRZNmgRHdcvbYhT2IlOX5E1Dxtpq66cFKwaFs Nov 1 00:55:05.693983 kernel: audit: type=1101 audit(1761958505.683:523): pid=4882 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:05.695271 sshd[4882]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:55:05.692000 audit[4882]: CRED_ACQ pid=4882 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:05.708041 kernel: audit: type=1103 audit(1761958505.692:524): pid=4882 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:05.708127 kernel: audit: type=1006 audit(1761958505.692:525): pid=4882 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Nov 1 00:55:05.708157 kernel: audit: type=1300 audit(1761958505.692:525): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcc25568e0 a2=3 a3=0 items=0 ppid=1 pid=4882 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" 
subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:55:05.692000 audit[4882]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcc25568e0 a2=3 a3=0 items=0 ppid=1 pid=4882 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:55:05.717388 kernel: audit: type=1327 audit(1761958505.692:525): proctitle=737368643A20636F7265205B707269765D Nov 1 00:55:05.692000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:55:05.721349 systemd[1]: Started session-16.scope. Nov 1 00:55:05.721907 systemd-logind[1290]: New session 16 of user core. Nov 1 00:55:05.731000 audit[4882]: USER_START pid=4882 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:05.741834 kernel: audit: type=1105 audit(1761958505.731:526): pid=4882 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:05.733000 audit[4885]: CRED_ACQ pid=4885 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:05.749852 kernel: audit: type=1103 audit(1761958505.733:527): pid=4885 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:05.778189 env[1305]: 
time="2025-11-01T00:55:05.778126939Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:55:05.850548 env[1305]: time="2025-11-01T00:55:05.850465010Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:55:05.851529 kubelet[2095]: E1101 00:55:05.850920 2095 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:55:05.851529 kubelet[2095]: E1101 00:55:05.850972 2095 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:55:05.851529 kubelet[2095]: E1101 00:55:05.851086 2095 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qgc7f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-twt7m_calico-system(b08705e4-7a04-4c33-a8c8-a3f67298574d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:55:05.854270 env[1305]: time="2025-11-01T00:55:05.854173758Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:55:05.959035 sshd[4882]: pam_unix(sshd:session): session closed for user core Nov 1 00:55:05.959000 audit[4882]: USER_END pid=4882 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:05.963509 systemd[1]: Started sshd@18-144.126.212.254:22-139.178.89.65:52722.service. Nov 1 00:55:05.971315 kernel: audit: type=1106 audit(1761958505.959:528): pid=4882 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:05.966884 systemd[1]: sshd@17-144.126.212.254:22-139.178.89.65:52712.service: Deactivated successfully. Nov 1 00:55:05.967956 systemd[1]: session-16.scope: Deactivated successfully. Nov 1 00:55:05.971638 systemd-logind[1290]: Session 16 logged out. Waiting for processes to exit. Nov 1 00:55:05.974148 systemd-logind[1290]: Removed session 16. Nov 1 00:55:05.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-144.126.212.254:22-139.178.89.65:52722 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:55:05.985791 kernel: audit: type=1130 audit(1761958505.959:529): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-144.126.212.254:22-139.178.89.65:52722 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:55:05.959000 audit[4882]: CRED_DISP pid=4882 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:05.965000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-144.126.212.254:22-139.178.89.65:52712 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:55:06.042000 audit[4892]: USER_ACCT pid=4892 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:06.043547 sshd[4892]: Accepted publickey for core from 139.178.89.65 port 52722 ssh2: RSA SHA256:bQOwnZoRZNmgRHdcvbYhT2IlOX5E1Dxtpq66cFKwaFs Nov 1 00:55:06.044000 audit[4892]: CRED_ACQ pid=4892 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:06.044000 audit[4892]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe5d15a560 a2=3 a3=0 items=0 ppid=1 pid=4892 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:55:06.044000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:55:06.045645 
sshd[4892]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:55:06.051400 systemd[1]: Started session-17.scope. Nov 1 00:55:06.051863 systemd-logind[1290]: New session 17 of user core. Nov 1 00:55:06.060000 audit[4892]: USER_START pid=4892 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:06.063000 audit[4897]: CRED_ACQ pid=4897 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:06.158385 env[1305]: time="2025-11-01T00:55:06.158182668Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:55:06.159132 env[1305]: time="2025-11-01T00:55:06.158996922Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:55:06.159585 kubelet[2095]: E1101 00:55:06.159331 2095 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:55:06.159585 kubelet[2095]: E1101 00:55:06.159382 2095 kuberuntime_image.go:55] "Failed to pull image" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:55:06.159585 kubelet[2095]: E1101 00:55:06.159522 2095 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qgc7f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:ni
l,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-twt7m_calico-system(b08705e4-7a04-4c33-a8c8-a3f67298574d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:55:06.161041 kubelet[2095]: E1101 00:55:06.160974 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-twt7m" podUID="b08705e4-7a04-4c33-a8c8-a3f67298574d" Nov 1 00:55:06.360012 sshd[4892]: pam_unix(sshd:session): session closed for user core Nov 1 00:55:06.363525 systemd[1]: Started sshd@19-144.126.212.254:22-139.178.89.65:57572.service. 
Nov 1 00:55:06.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-144.126.212.254:22-139.178.89.65:57572 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:55:06.368000 audit[4892]: USER_END pid=4892 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:06.369000 audit[4892]: CRED_DISP pid=4892 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:06.379000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-144.126.212.254:22-139.178.89.65:52722 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:55:06.380007 systemd[1]: sshd@18-144.126.212.254:22-139.178.89.65:52722.service: Deactivated successfully. Nov 1 00:55:06.380930 systemd[1]: session-17.scope: Deactivated successfully. Nov 1 00:55:06.381452 systemd-logind[1290]: Session 17 logged out. Waiting for processes to exit. Nov 1 00:55:06.382804 systemd-logind[1290]: Removed session 17. 
Nov 1 00:55:06.463735 sshd[4903]: Accepted publickey for core from 139.178.89.65 port 57572 ssh2: RSA SHA256:bQOwnZoRZNmgRHdcvbYhT2IlOX5E1Dxtpq66cFKwaFs Nov 1 00:55:06.462000 audit[4903]: USER_ACCT pid=4903 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:06.464000 audit[4903]: CRED_ACQ pid=4903 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:06.464000 audit[4903]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffaa90d790 a2=3 a3=0 items=0 ppid=1 pid=4903 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:55:06.464000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:55:06.467188 sshd[4903]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:55:06.476283 systemd[1]: Started session-18.scope. Nov 1 00:55:06.476637 systemd-logind[1290]: New session 18 of user core. 
Nov 1 00:55:06.486000 audit[4903]: USER_START pid=4903 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:06.488000 audit[4908]: CRED_ACQ pid=4908 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:07.383032 sshd[4903]: pam_unix(sshd:session): session closed for user core Nov 1 00:55:07.387298 systemd[1]: Started sshd@20-144.126.212.254:22-139.178.89.65:57580.service. Nov 1 00:55:07.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-144.126.212.254:22-139.178.89.65:57580 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:55:07.389000 audit[4903]: USER_END pid=4903 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:07.389000 audit[4903]: CRED_DISP pid=4903 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:07.392291 systemd[1]: sshd@19-144.126.212.254:22-139.178.89.65:57572.service: Deactivated successfully. 
Nov 1 00:55:07.392000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-144.126.212.254:22-139.178.89.65:57572 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:55:07.395423 systemd[1]: session-18.scope: Deactivated successfully. Nov 1 00:55:07.395801 systemd-logind[1290]: Session 18 logged out. Waiting for processes to exit. Nov 1 00:55:07.400241 systemd-logind[1290]: Removed session 18. Nov 1 00:55:07.470620 sshd[4917]: Accepted publickey for core from 139.178.89.65 port 57580 ssh2: RSA SHA256:bQOwnZoRZNmgRHdcvbYhT2IlOX5E1Dxtpq66cFKwaFs Nov 1 00:55:07.469000 audit[4917]: USER_ACCT pid=4917 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:07.471000 audit[4917]: CRED_ACQ pid=4917 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:07.471000 audit[4917]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe0cb41a20 a2=3 a3=0 items=0 ppid=1 pid=4917 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:55:07.471000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:55:07.473464 sshd[4917]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:55:07.479443 systemd[1]: Started session-19.scope. Nov 1 00:55:07.479870 systemd-logind[1290]: New session 19 of user core. 
Nov 1 00:55:07.487000 audit[4917]: USER_START pid=4917 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:07.489000 audit[4924]: CRED_ACQ pid=4924 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:07.500000 audit[4922]: NETFILTER_CFG table=filter:129 family=2 entries=26 op=nft_register_rule pid=4922 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:55:07.500000 audit[4922]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fff6f04e270 a2=0 a3=7fff6f04e25c items=0 ppid=2198 pid=4922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:55:07.500000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:55:07.505000 audit[4922]: NETFILTER_CFG table=nat:130 family=2 entries=20 op=nft_register_rule pid=4922 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:55:07.505000 audit[4922]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff6f04e270 a2=0 a3=0 items=0 ppid=2198 pid=4922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:55:07.505000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:55:07.519000 
audit[4926]: NETFILTER_CFG table=filter:131 family=2 entries=38 op=nft_register_rule pid=4926 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:55:07.519000 audit[4926]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffd15be0c00 a2=0 a3=7ffd15be0bec items=0 ppid=2198 pid=4926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:55:07.519000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:55:07.523000 audit[4926]: NETFILTER_CFG table=nat:132 family=2 entries=20 op=nft_register_rule pid=4926 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:55:07.523000 audit[4926]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd15be0c00 a2=0 a3=0 items=0 ppid=2198 pid=4926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:55:07.523000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:55:08.040798 sshd[4917]: pam_unix(sshd:session): session closed for user core Nov 1 00:55:08.044000 audit[4917]: USER_END pid=4917 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:08.044000 audit[4917]: CRED_DISP pid=4917 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 
addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:08.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-144.126.212.254:22-139.178.89.65:57592 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:55:08.046898 systemd[1]: Started sshd@21-144.126.212.254:22-139.178.89.65:57592.service. Nov 1 00:55:08.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-144.126.212.254:22-139.178.89.65:57580 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:55:08.050915 systemd[1]: sshd@20-144.126.212.254:22-139.178.89.65:57580.service: Deactivated successfully. Nov 1 00:55:08.052430 systemd[1]: session-19.scope: Deactivated successfully. Nov 1 00:55:08.052982 systemd-logind[1290]: Session 19 logged out. Waiting for processes to exit. Nov 1 00:55:08.055900 systemd-logind[1290]: Removed session 19. 
Nov 1 00:55:08.115000 audit[4932]: USER_ACCT pid=4932 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:08.116523 sshd[4932]: Accepted publickey for core from 139.178.89.65 port 57592 ssh2: RSA SHA256:bQOwnZoRZNmgRHdcvbYhT2IlOX5E1Dxtpq66cFKwaFs Nov 1 00:55:08.116000 audit[4932]: CRED_ACQ pid=4932 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:08.117000 audit[4932]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcf84f4c30 a2=3 a3=0 items=0 ppid=1 pid=4932 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:55:08.117000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:55:08.118150 sshd[4932]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:55:08.124150 systemd[1]: Started session-20.scope. Nov 1 00:55:08.124368 systemd-logind[1290]: New session 20 of user core. 
Nov 1 00:55:08.131000 audit[4932]: USER_START pid=4932 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:08.133000 audit[4937]: CRED_ACQ pid=4937 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:08.340490 sshd[4932]: pam_unix(sshd:session): session closed for user core Nov 1 00:55:08.341000 audit[4932]: USER_END pid=4932 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:08.341000 audit[4932]: CRED_DISP pid=4932 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:08.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-144.126.212.254:22-139.178.89.65:57592 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:55:08.346427 systemd[1]: sshd@21-144.126.212.254:22-139.178.89.65:57592.service: Deactivated successfully. Nov 1 00:55:08.348385 systemd-logind[1290]: Session 20 logged out. Waiting for processes to exit. Nov 1 00:55:08.349362 systemd[1]: session-20.scope: Deactivated successfully. Nov 1 00:55:08.350636 systemd-logind[1290]: Removed session 20. 
Nov 1 00:55:09.469504 kubelet[2095]: E1101 00:55:09.469456 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6dfb57dc84-knf65" podUID="0234b74a-300a-4772-b752-16560b6b9a9c" Nov 1 00:55:12.466746 kubelet[2095]: E1101 00:55:12.466707 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f668d4ccf-fmsxj" podUID="447c37d4-c1de-4035-a57b-b729047ea7fb" Nov 1 00:55:13.346144 systemd[1]: Started sshd@22-144.126.212.254:22-139.178.89.65:57596.service. 
Nov 1 00:55:13.350534 kernel: kauditd_printk_skb: 57 callbacks suppressed Nov 1 00:55:13.350666 kernel: audit: type=1130 audit(1761958513.346:571): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-144.126.212.254:22-139.178.89.65:57596 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:55:13.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-144.126.212.254:22-139.178.89.65:57596 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:55:13.399000 audit[4949]: USER_ACCT pid=4949 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:13.400784 sshd[4949]: Accepted publickey for core from 139.178.89.65 port 57596 ssh2: RSA SHA256:bQOwnZoRZNmgRHdcvbYhT2IlOX5E1Dxtpq66cFKwaFs Nov 1 00:55:13.409784 kernel: audit: type=1101 audit(1761958513.399:572): pid=4949 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:13.410431 sshd[4949]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:55:13.422079 kernel: audit: type=1103 audit(1761958513.408:573): pid=4949 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:13.408000 audit[4949]: CRED_ACQ pid=4949 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:13.432428 kernel: audit: type=1006 audit(1761958513.408:574): pid=4949 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Nov 1 00:55:13.435098 systemd-logind[1290]: New session 21 of user core. Nov 1 00:55:13.436615 systemd[1]: Started session-21.scope. Nov 1 00:55:13.408000 audit[4949]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdfa6196a0 a2=3 a3=0 items=0 ppid=1 pid=4949 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:55:13.453789 kernel: audit: type=1300 audit(1761958513.408:574): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdfa6196a0 a2=3 a3=0 items=0 ppid=1 pid=4949 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:55:13.408000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:55:13.466406 kernel: audit: type=1327 audit(1761958513.408:574): proctitle=737368643A20636F7265205B707269765D Nov 1 00:55:13.466524 kernel: audit: type=1105 audit(1761958513.444:575): pid=4949 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:13.444000 audit[4949]: USER_START pid=4949 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 
terminal=ssh res=success' Nov 1 00:55:13.453000 audit[4952]: CRED_ACQ pid=4952 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:13.476880 kernel: audit: type=1103 audit(1761958513.453:576): pid=4952 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:13.735647 sshd[4949]: pam_unix(sshd:session): session closed for user core Nov 1 00:55:13.736000 audit[4949]: USER_END pid=4949 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:13.748808 kernel: audit: type=1106 audit(1761958513.736:577): pid=4949 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:13.747855 systemd[1]: sshd@22-144.126.212.254:22-139.178.89.65:57596.service: Deactivated successfully. Nov 1 00:55:13.749903 systemd[1]: session-21.scope: Deactivated successfully. Nov 1 00:55:13.750628 systemd-logind[1290]: Session 21 logged out. Waiting for processes to exit. Nov 1 00:55:13.752499 systemd-logind[1290]: Removed session 21. 
Nov 1 00:55:13.736000 audit[4949]: CRED_DISP pid=4949 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:13.766838 kernel: audit: type=1104 audit(1761958513.736:578): pid=4949 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:13.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-144.126.212.254:22-139.178.89.65:57596 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:55:13.809000 audit[4962]: NETFILTER_CFG table=filter:133 family=2 entries=26 op=nft_register_rule pid=4962 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:55:13.809000 audit[4962]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc47aee720 a2=0 a3=7ffc47aee70c items=0 ppid=2198 pid=4962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:55:13.809000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:55:13.819000 audit[4962]: NETFILTER_CFG table=nat:134 family=2 entries=104 op=nft_register_chain pid=4962 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:55:13.819000 audit[4962]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffc47aee720 a2=0 a3=7ffc47aee70c items=0 ppid=2198 pid=4962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:55:13.819000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:55:14.468642 kubelet[2095]: E1101 00:55:14.468597 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-j9dnh" podUID="acf47117-3eb1-4aa3-89a4-bc9fecdad703" Nov 1 00:55:16.469025 kubelet[2095]: E1101 00:55:16.468976 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f668d4ccf-gzvhz" podUID="0aeb6ff7-2d7d-423c-8068-1607bda1ebe8" Nov 1 00:55:16.470311 kubelet[2095]: E1101 00:55:16.470272 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85b568d67d-z4c8c" podUID="0979e255-e4e9-4664-a95e-5354a9f7d531" Nov 1 00:55:18.470937 kubelet[2095]: E1101 00:55:18.470882 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-twt7m" podUID="b08705e4-7a04-4c33-a8c8-a3f67298574d" Nov 1 00:55:18.743128 systemd[1]: Started sshd@23-144.126.212.254:22-139.178.89.65:41138.service. Nov 1 00:55:18.752786 kernel: kauditd_printk_skb: 7 callbacks suppressed Nov 1 00:55:18.752942 kernel: audit: type=1130 audit(1761958518.743:582): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-144.126.212.254:22-139.178.89.65:41138 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:55:18.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-144.126.212.254:22-139.178.89.65:41138 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:55:18.826506 sshd[4963]: Accepted publickey for core from 139.178.89.65 port 41138 ssh2: RSA SHA256:bQOwnZoRZNmgRHdcvbYhT2IlOX5E1Dxtpq66cFKwaFs Nov 1 00:55:18.825000 audit[4963]: USER_ACCT pid=4963 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:18.836779 kernel: audit: type=1101 audit(1761958518.825:583): pid=4963 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:18.837329 sshd[4963]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:55:18.835000 audit[4963]: CRED_ACQ pid=4963 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:18.846780 kernel: audit: type=1103 audit(1761958518.835:584): pid=4963 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:18.852772 kernel: audit: type=1006 audit(1761958518.835:585): pid=4963 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Nov 1 00:55:18.857540 systemd-logind[1290]: New session 22 of user core. 
Nov 1 00:55:18.865980 kernel: audit: type=1300 audit(1761958518.835:585): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc2742ab30 a2=3 a3=0 items=0 ppid=1 pid=4963 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:55:18.835000 audit[4963]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc2742ab30 a2=3 a3=0 items=0 ppid=1 pid=4963 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:55:18.858353 systemd[1]: Started session-22.scope. Nov 1 00:55:18.835000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:55:18.866000 audit[4963]: USER_START pid=4963 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:18.882106 kernel: audit: type=1327 audit(1761958518.835:585): proctitle=737368643A20636F7265205B707269765D Nov 1 00:55:18.882172 kernel: audit: type=1105 audit(1761958518.866:586): pid=4963 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:18.868000 audit[4966]: CRED_ACQ pid=4966 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:18.894678 kernel: audit: type=1103 audit(1761958518.868:587): pid=4966 uid=0 auid=500 ses=22 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:19.082266 sshd[4963]: pam_unix(sshd:session): session closed for user core Nov 1 00:55:19.083000 audit[4963]: USER_END pid=4963 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:19.087531 systemd-logind[1290]: Session 22 logged out. Waiting for processes to exit. Nov 1 00:55:19.089315 systemd[1]: sshd@23-144.126.212.254:22-139.178.89.65:41138.service: Deactivated successfully. Nov 1 00:55:19.090193 systemd[1]: session-22.scope: Deactivated successfully. Nov 1 00:55:19.091838 systemd-logind[1290]: Removed session 22. Nov 1 00:55:19.092821 kernel: audit: type=1106 audit(1761958519.083:588): pid=4963 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:19.083000 audit[4963]: CRED_DISP pid=4963 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:19.100779 kernel: audit: type=1104 audit(1761958519.083:589): pid=4963 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:19.088000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 
ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-144.126.212.254:22-139.178.89.65:41138 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:55:20.468480 kubelet[2095]: E1101 00:55:20.468438 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6dfb57dc84-knf65" podUID="0234b74a-300a-4772-b752-16560b6b9a9c" Nov 1 00:55:24.087307 systemd[1]: Started sshd@24-144.126.212.254:22-139.178.89.65:41140.service. Nov 1 00:55:24.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-144.126.212.254:22-139.178.89.65:41140 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:55:24.089862 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 00:55:24.089943 kernel: audit: type=1130 audit(1761958524.087:591): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-144.126.212.254:22-139.178.89.65:41140 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:55:24.153000 audit[4977]: USER_ACCT pid=4977 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:24.158035 sshd[4977]: Accepted publickey for core from 139.178.89.65 port 41140 ssh2: RSA SHA256:bQOwnZoRZNmgRHdcvbYhT2IlOX5E1Dxtpq66cFKwaFs Nov 1 00:55:24.165816 kernel: audit: type=1101 audit(1761958524.153:592): pid=4977 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:24.166808 sshd[4977]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:55:24.165000 audit[4977]: CRED_ACQ pid=4977 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:24.178015 kernel: audit: type=1103 audit(1761958524.165:593): pid=4977 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:24.184808 kernel: audit: type=1006 audit(1761958524.165:594): pid=4977 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Nov 1 00:55:24.165000 audit[4977]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc148ba710 a2=3 a3=0 items=0 ppid=1 pid=4977 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 
key=(null) Nov 1 00:55:24.193950 kernel: audit: type=1300 audit(1761958524.165:594): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc148ba710 a2=3 a3=0 items=0 ppid=1 pid=4977 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:55:24.165000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:55:24.197963 kernel: audit: type=1327 audit(1761958524.165:594): proctitle=737368643A20636F7265205B707269765D Nov 1 00:55:24.202577 systemd-logind[1290]: New session 23 of user core. Nov 1 00:55:24.203828 systemd[1]: Started session-23.scope. Nov 1 00:55:24.216000 audit[4977]: USER_START pid=4977 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:24.227910 kernel: audit: type=1105 audit(1761958524.216:595): pid=4977 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:24.227000 audit[4980]: CRED_ACQ pid=4980 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:24.243857 kernel: audit: type=1103 audit(1761958524.227:596): pid=4980 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:24.578869 sshd[4977]: 
pam_unix(sshd:session): session closed for user core Nov 1 00:55:24.579000 audit[4977]: USER_END pid=4977 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:24.589799 kernel: audit: type=1106 audit(1761958524.579:597): pid=4977 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:24.589000 audit[4977]: CRED_DISP pid=4977 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:24.592101 systemd[1]: sshd@24-144.126.212.254:22-139.178.89.65:41140.service: Deactivated successfully. Nov 1 00:55:24.593826 systemd[1]: session-23.scope: Deactivated successfully. Nov 1 00:55:24.594378 systemd-logind[1290]: Session 23 logged out. Waiting for processes to exit. Nov 1 00:55:24.595567 systemd-logind[1290]: Removed session 23. Nov 1 00:55:24.599777 kernel: audit: type=1104 audit(1761958524.589:598): pid=4977 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:24.591000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-144.126.212.254:22-139.178.89.65:41140 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:55:26.482293 kubelet[2095]: E1101 00:55:26.481488 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-j9dnh" podUID="acf47117-3eb1-4aa3-89a4-bc9fecdad703" Nov 1 00:55:26.483894 kubelet[2095]: E1101 00:55:26.481664 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f668d4ccf-fmsxj" podUID="447c37d4-c1de-4035-a57b-b729047ea7fb" Nov 1 00:55:28.468683 kubelet[2095]: E1101 00:55:28.468644 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f668d4ccf-gzvhz" podUID="0aeb6ff7-2d7d-423c-8068-1607bda1ebe8" Nov 1 00:55:29.467193 kubelet[2095]: E1101 00:55:29.467144 2095 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85b568d67d-z4c8c" podUID="0979e255-e4e9-4664-a95e-5354a9f7d531" Nov 1 00:55:29.581730 systemd[1]: Started sshd@25-144.126.212.254:22-139.178.89.65:56366.service. Nov 1 00:55:29.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-144.126.212.254:22-139.178.89.65:56366 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:55:29.585437 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 00:55:29.585518 kernel: audit: type=1130 audit(1761958529.582:600): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-144.126.212.254:22-139.178.89.65:56366 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:55:29.656000 audit[5010]: USER_ACCT pid=5010 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:29.658023 sshd[5010]: Accepted publickey for core from 139.178.89.65 port 56366 ssh2: RSA SHA256:bQOwnZoRZNmgRHdcvbYhT2IlOX5E1Dxtpq66cFKwaFs Nov 1 00:55:29.666806 kernel: audit: type=1101 audit(1761958529.656:601): pid=5010 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:29.666000 audit[5010]: CRED_ACQ pid=5010 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:29.674992 kernel: audit: type=1103 audit(1761958529.666:602): pid=5010 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:29.676089 sshd[5010]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:55:29.690283 kernel: audit: type=1006 audit(1761958529.666:603): pid=5010 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Nov 1 00:55:29.690397 kernel: audit: type=1300 audit(1761958529.666:603): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd13a28c10 a2=3 a3=0 items=0 ppid=1 pid=5010 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" 
subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:55:29.666000 audit[5010]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd13a28c10 a2=3 a3=0 items=0 ppid=1 pid=5010 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:55:29.666000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:55:29.693828 kernel: audit: type=1327 audit(1761958529.666:603): proctitle=737368643A20636F7265205B707269765D Nov 1 00:55:29.696987 systemd[1]: Started session-24.scope. Nov 1 00:55:29.697178 systemd-logind[1290]: New session 24 of user core. Nov 1 00:55:29.702000 audit[5010]: USER_START pid=5010 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:29.717900 kernel: audit: type=1105 audit(1761958529.702:604): pid=5010 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:29.717000 audit[5013]: CRED_ACQ pid=5013 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:29.733851 kernel: audit: type=1103 audit(1761958529.717:605): pid=5013 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:30.075943 sshd[5010]: 
pam_unix(sshd:session): session closed for user core Nov 1 00:55:30.076000 audit[5010]: USER_END pid=5010 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:30.079507 systemd[1]: sshd@25-144.126.212.254:22-139.178.89.65:56366.service: Deactivated successfully. Nov 1 00:55:30.080903 systemd[1]: session-24.scope: Deactivated successfully. Nov 1 00:55:30.081452 systemd-logind[1290]: Session 24 logged out. Waiting for processes to exit. Nov 1 00:55:30.082511 systemd-logind[1290]: Removed session 24. Nov 1 00:55:30.087783 kernel: audit: type=1106 audit(1761958530.076:606): pid=5010 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:30.077000 audit[5010]: CRED_DISP pid=5010 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:30.095780 kernel: audit: type=1104 audit(1761958530.077:607): pid=5010 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:30.078000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-144.126.212.254:22-139.178.89.65:56366 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:55:31.467156 kubelet[2095]: E1101 00:55:31.467103 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-twt7m" podUID="b08705e4-7a04-4c33-a8c8-a3f67298574d" Nov 1 00:55:33.467040 kubelet[2095]: E1101 00:55:33.466992 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6dfb57dc84-knf65" podUID="0234b74a-300a-4772-b752-16560b6b9a9c" Nov 1 00:55:35.083486 systemd[1]: Started sshd@26-144.126.212.254:22-139.178.89.65:56370.service. Nov 1 00:55:35.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-144.126.212.254:22-139.178.89.65:56370 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:55:35.089775 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 00:55:35.089878 kernel: audit: type=1130 audit(1761958535.083:609): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-144.126.212.254:22-139.178.89.65:56370 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:55:35.139000 audit[5023]: USER_ACCT pid=5023 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:35.140435 sshd[5023]: Accepted publickey for core from 139.178.89.65 port 56370 ssh2: RSA SHA256:bQOwnZoRZNmgRHdcvbYhT2IlOX5E1Dxtpq66cFKwaFs Nov 1 00:55:35.148825 kernel: audit: type=1101 audit(1761958535.139:610): pid=5023 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:35.149099 sshd[5023]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:55:35.147000 audit[5023]: CRED_ACQ pid=5023 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:35.160805 kernel: audit: type=1103 audit(1761958535.147:611): pid=5023 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:35.147000 audit[5023]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd1a9678e0 a2=3 a3=0 items=0 ppid=1 pid=5023 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:55:35.169410 systemd[1]: Started session-25.scope. Nov 1 00:55:35.170877 systemd-logind[1290]: New session 25 of user core. Nov 1 00:55:35.174415 kernel: audit: type=1006 audit(1761958535.147:612): pid=5023 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Nov 1 00:55:35.174499 kernel: audit: type=1300 audit(1761958535.147:612): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd1a9678e0 a2=3 a3=0 items=0 ppid=1 pid=5023 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:55:35.147000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:55:35.182476 kernel: audit: type=1327 audit(1761958535.147:612): proctitle=737368643A20636F7265205B707269765D Nov 1 00:55:35.183000 audit[5023]: USER_START pid=5023 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:35.193843 kernel: audit: type=1105 audit(1761958535.183:613): pid=5023 
uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:35.193000 audit[5026]: CRED_ACQ pid=5026 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:35.206035 kernel: audit: type=1103 audit(1761958535.193:614): pid=5026 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:35.369349 sshd[5023]: pam_unix(sshd:session): session closed for user core Nov 1 00:55:35.370000 audit[5023]: USER_END pid=5023 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:35.372420 systemd[1]: sshd@26-144.126.212.254:22-139.178.89.65:56370.service: Deactivated successfully. Nov 1 00:55:35.373250 systemd[1]: session-25.scope: Deactivated successfully. 
Nov 1 00:55:35.380093 kernel: audit: type=1106 audit(1761958535.370:615): pid=5023 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:35.370000 audit[5023]: CRED_DISP pid=5023 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:35.383736 systemd-logind[1290]: Session 25 logged out. Waiting for processes to exit. Nov 1 00:55:35.388792 kernel: audit: type=1104 audit(1761958535.370:616): pid=5023 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 00:55:35.390301 systemd-logind[1290]: Removed session 25. Nov 1 00:55:35.371000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-144.126.212.254:22-139.178.89.65:56370 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:55:37.470104 kubelet[2095]: E1101 00:55:37.470047 2095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:55:38.466375 kubelet[2095]: E1101 00:55:38.466336 2095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Nov 1 00:55:39.475416 env[1305]: time="2025-11-01T00:55:39.475367667Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:55:39.772573 env[1305]: time="2025-11-01T00:55:39.772402914Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:55:39.774073 env[1305]: time="2025-11-01T00:55:39.773982858Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:55:39.775933 kubelet[2095]: E1101 00:55:39.775877 2095 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:55:39.776377 kubelet[2095]: E1101 00:55:39.776350 2095 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" 
Nov 1 00:55:39.776637 kubelet[2095]: E1101 00:55:39.776593 2095 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vvfdv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-j9dnh_calico-system(acf47117-3eb1-4aa3-89a4-bc9fecdad703): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:55:39.778977 kubelet[2095]: E1101 00:55:39.778930 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-j9dnh" podUID="acf47117-3eb1-4aa3-89a4-bc9fecdad703" Nov 1 00:55:40.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-144.126.212.254:22-139.178.89.65:55266 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success'
Nov 1 00:55:40.372538 systemd[1]: Started sshd@27-144.126.212.254:22-139.178.89.65:55266.service.
Nov 1 00:55:40.374387 kernel: kauditd_printk_skb: 1 callbacks suppressed
Nov 1 00:55:40.374495 kernel: audit: type=1130 audit(1761958540.371:618): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-144.126.212.254:22-139.178.89.65:55266 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:55:40.434000 audit[5044]: USER_ACCT pid=5044 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Nov 1 00:55:40.436297 sshd[5044]: Accepted publickey for core from 139.178.89.65 port 55266 ssh2: RSA SHA256:bQOwnZoRZNmgRHdcvbYhT2IlOX5E1Dxtpq66cFKwaFs
Nov 1 00:55:40.443861 kernel: audit: type=1101 audit(1761958540.434:619): pid=5044 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Nov 1 00:55:40.445305 sshd[5044]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:55:40.444000 audit[5044]: CRED_ACQ pid=5044 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Nov 1 00:55:40.451677 systemd[1]: Started session-26.scope.
Nov 1 00:55:40.453121 systemd-logind[1290]: New session 26 of user core.
Nov 1 00:55:40.462210 kernel: audit: type=1103 audit(1761958540.444:620): pid=5044 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Nov 1 00:55:40.462247 kernel: audit: type=1006 audit(1761958540.444:621): pid=5044 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1
Nov 1 00:55:40.462269 kernel: audit: type=1300 audit(1761958540.444:621): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdc40a29a0 a2=3 a3=0 items=0 ppid=1 pid=5044 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:55:40.444000 audit[5044]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdc40a29a0 a2=3 a3=0 items=0 ppid=1 pid=5044 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:55:40.444000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Nov 1 00:55:40.479772 kernel: audit: type=1327 audit(1761958540.444:621): proctitle=737368643A20636F7265205B707269765D
Nov 1 00:55:40.482000 audit[5044]: USER_START pid=5044 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Nov 1 00:55:40.491776 kernel: audit: type=1105 audit(1761958540.482:622): pid=5044 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Nov 1 00:55:40.492000 audit[5047]: CRED_ACQ pid=5047 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Nov 1 00:55:40.500781 kernel: audit: type=1103 audit(1761958540.492:623): pid=5047 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Nov 1 00:55:40.712607 sshd[5044]: pam_unix(sshd:session): session closed for user core
Nov 1 00:55:40.713000 audit[5044]: USER_END pid=5044 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Nov 1 00:55:40.716564 systemd[1]: sshd@27-144.126.212.254:22-139.178.89.65:55266.service: Deactivated successfully.
Nov 1 00:55:40.717501 systemd[1]: session-26.scope: Deactivated successfully.
Nov 1 00:55:40.723832 kernel: audit: type=1106 audit(1761958540.713:624): pid=5044 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Nov 1 00:55:40.724143 systemd-logind[1290]: Session 26 logged out. Waiting for processes to exit.
Nov 1 00:55:40.725560 systemd-logind[1290]: Removed session 26.
Nov 1 00:55:40.713000 audit[5044]: CRED_DISP pid=5044 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Nov 1 00:55:40.738224 kernel: audit: type=1104 audit(1761958540.713:625): pid=5044 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Nov 1 00:55:40.713000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-144.126.212.254:22-139.178.89.65:55266 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:55:41.467118 kubelet[2095]: E1101 00:55:41.467064 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f668d4ccf-fmsxj" podUID="447c37d4-c1de-4035-a57b-b729047ea7fb"
Nov 1 00:55:42.467228 kubelet[2095]: E1101 00:55:42.467190 2095 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Nov 1 00:55:42.468368 kubelet[2095]: E1101 00:55:42.468338 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85b568d67d-z4c8c" podUID="0979e255-e4e9-4664-a95e-5354a9f7d531"
Nov 1 00:55:43.467726 env[1305]: time="2025-11-01T00:55:43.467677148Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 1 00:55:43.468201 kubelet[2095]: E1101 00:55:43.468163 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-twt7m" podUID="b08705e4-7a04-4c33-a8c8-a3f67298574d"
Nov 1 00:55:43.764556 env[1305]: time="2025-11-01T00:55:43.764273277Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 1 00:55:43.765344 env[1305]: time="2025-11-01T00:55:43.765226276Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 1 00:55:43.765631 kubelet[2095]: E1101 00:55:43.765595 2095 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 1 00:55:43.765799 kubelet[2095]: E1101 00:55:43.765743 2095 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 1 00:55:43.766102 kubelet[2095]: E1101 00:55:43.766060 2095 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gj6f7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5f668d4ccf-gzvhz_calico-apiserver(0aeb6ff7-2d7d-423c-8068-1607bda1ebe8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 1 00:55:43.767439 kubelet[2095]: E1101 00:55:43.767404 2095 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f668d4ccf-gzvhz" podUID="0aeb6ff7-2d7d-423c-8068-1607bda1ebe8"