Nov 1 00:44:50.281011 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Oct 31 23:02:53 -00 2025
Nov 1 00:44:50.281048 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c4c72a4f851a6da01cbc7150799371516ef8311ea786098908d8eb164df01ee2
Nov 1 00:44:50.281067 kernel: BIOS-provided physical RAM map:
Nov 1 00:44:50.281083 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 1 00:44:50.281100 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 1 00:44:50.281116 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 1 00:44:50.281136 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Nov 1 00:44:50.281155 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Nov 1 00:44:50.281175 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 1 00:44:50.281192 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Nov 1 00:44:50.281208 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 1 00:44:50.281225 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 1 00:44:50.281241 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 1 00:44:50.281258 kernel: NX (Execute Disable) protection: active
Nov 1 00:44:50.281280 kernel: SMBIOS 2.8 present.
Nov 1 00:44:50.281297 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Nov 1 00:44:50.281315 kernel: Hypervisor detected: KVM
Nov 1 00:44:50.281331 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 1 00:44:50.281348 kernel: kvm-clock: cpu 0, msr 531a0001, primary cpu clock
Nov 1 00:44:50.281364 kernel: kvm-clock: using sched offset of 2948045007 cycles
Nov 1 00:44:50.281382 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 1 00:44:50.281399 kernel: tsc: Detected 2794.748 MHz processor
Nov 1 00:44:50.281415 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 1 00:44:50.281427 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 1 00:44:50.281435 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Nov 1 00:44:50.281445 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 1 00:44:50.281458 kernel: Using GB pages for direct mapping
Nov 1 00:44:50.281466 kernel: ACPI: Early table checksum verification disabled
Nov 1 00:44:50.281474 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Nov 1 00:44:50.281483 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:44:50.281491 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:44:50.281500 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:44:50.281512 kernel: ACPI: FACS 0x000000009CFE0000 000040
Nov 1 00:44:50.281521 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:44:50.281533 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:44:50.281545 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:44:50.281553 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:44:50.281562 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Nov 1 00:44:50.281571 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Nov 1 00:44:50.281579 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Nov 1 00:44:50.281594 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Nov 1 00:44:50.281603 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Nov 1 00:44:50.281618 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Nov 1 00:44:50.281628 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Nov 1 00:44:50.281636 kernel: No NUMA configuration found
Nov 1 00:44:50.281645 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Nov 1 00:44:50.281657 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Nov 1 00:44:50.281673 kernel: Zone ranges:
Nov 1 00:44:50.281690 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 1 00:44:50.281699 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Nov 1 00:44:50.281708 kernel: Normal empty
Nov 1 00:44:50.281717 kernel: Movable zone start for each node
Nov 1 00:44:50.281733 kernel: Early memory node ranges
Nov 1 00:44:50.281743 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 1 00:44:50.281752 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Nov 1 00:44:50.281767 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Nov 1 00:44:50.281780 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 1 00:44:50.281789 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 1 00:44:50.281798 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Nov 1 00:44:50.281807 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 1 00:44:50.281816 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 1 00:44:50.283783 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 1 00:44:50.283802 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 1 00:44:50.283845 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 1 00:44:50.283865 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 1 00:44:50.283888 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 1 00:44:50.283908 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 1 00:44:50.283926 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 1 00:44:50.283945 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 1 00:44:50.283960 kernel: TSC deadline timer available
Nov 1 00:44:50.283975 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Nov 1 00:44:50.283986 kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 1 00:44:50.283995 kernel: kvm-guest: setup PV sched yield
Nov 1 00:44:50.284022 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Nov 1 00:44:50.284045 kernel: Booting paravirtualized kernel on KVM
Nov 1 00:44:50.284060 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 1 00:44:50.284069 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
Nov 1 00:44:50.284085 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288
Nov 1 00:44:50.284094 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152
Nov 1 00:44:50.284103 kernel: pcpu-alloc: [0] 0 1 2 3
Nov 1 00:44:50.284112 kernel: kvm-guest: setup async PF for cpu 0
Nov 1 00:44:50.284121 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0
Nov 1 00:44:50.284135 kernel: kvm-guest: PV spinlocks enabled
Nov 1 00:44:50.284154 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 1 00:44:50.284167 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Nov 1 00:44:50.284176 kernel: Policy zone: DMA32
Nov 1 00:44:50.284186 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c4c72a4f851a6da01cbc7150799371516ef8311ea786098908d8eb164df01ee2
Nov 1 00:44:50.284197 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Nov 1 00:44:50.284205 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 1 00:44:50.284214 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 1 00:44:50.284223 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 1 00:44:50.284245 kernel: Memory: 2436696K/2571752K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47496K init, 4084K bss, 134796K reserved, 0K cma-reserved)
Nov 1 00:44:50.284265 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 1 00:44:50.284283 kernel: ftrace: allocating 34614 entries in 136 pages
Nov 1 00:44:50.284302 kernel: ftrace: allocated 136 pages with 2 groups
Nov 1 00:44:50.284320 kernel: rcu: Hierarchical RCU implementation.
Nov 1 00:44:50.284340 kernel: rcu: RCU event tracing is enabled.
Nov 1 00:44:50.284359 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Nov 1 00:44:50.284375 kernel: Rude variant of Tasks RCU enabled.
Nov 1 00:44:50.284385 kernel: Tracing variant of Tasks RCU enabled.
Nov 1 00:44:50.284398 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 1 00:44:50.284413 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 1 00:44:50.284431 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Nov 1 00:44:50.284441 kernel: random: crng init done
Nov 1 00:44:50.284455 kernel: Console: colour VGA+ 80x25
Nov 1 00:44:50.284466 kernel: printk: console [ttyS0] enabled
Nov 1 00:44:50.284475 kernel: ACPI: Core revision 20210730
Nov 1 00:44:50.284485 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 1 00:44:50.284494 kernel: APIC: Switch to symmetric I/O mode setup
Nov 1 00:44:50.284516 kernel: x2apic enabled
Nov 1 00:44:50.284534 kernel: Switched APIC routing to physical x2apic.
Nov 1 00:44:50.284546 kernel: kvm-guest: setup PV IPIs
Nov 1 00:44:50.284555 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 1 00:44:50.284564 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 1 00:44:50.284578 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Nov 1 00:44:50.284591 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 1 00:44:50.284602 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 1 00:44:50.284617 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 1 00:44:50.284638 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 1 00:44:50.284647 kernel: Spectre V2 : Mitigation: Retpolines
Nov 1 00:44:50.284657 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 1 00:44:50.284669 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 1 00:44:50.284678 kernel: active return thunk: retbleed_return_thunk
Nov 1 00:44:50.284688 kernel: RETBleed: Mitigation: untrained return thunk
Nov 1 00:44:50.284697 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 1 00:44:50.284715 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Nov 1 00:44:50.284732 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 1 00:44:50.284748 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 1 00:44:50.284764 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 1 00:44:50.284784 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 1 00:44:50.284801 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Nov 1 00:44:50.284849 kernel: Freeing SMP alternatives memory: 32K
Nov 1 00:44:50.284864 kernel: pid_max: default: 32768 minimum: 301
Nov 1 00:44:50.284879 kernel: LSM: Security Framework initializing
Nov 1 00:44:50.284897 kernel: SELinux: Initializing.
Nov 1 00:44:50.284911 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 1 00:44:50.284921 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 1 00:44:50.284934 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 1 00:44:50.284948 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 1 00:44:50.284962 kernel: ... version: 0
Nov 1 00:44:50.284974 kernel: ... bit width: 48
Nov 1 00:44:50.284983 kernel: ... generic registers: 6
Nov 1 00:44:50.284991 kernel: ... value mask: 0000ffffffffffff
Nov 1 00:44:50.284999 kernel: ... max period: 00007fffffffffff
Nov 1 00:44:50.285010 kernel: ... fixed-purpose events: 0
Nov 1 00:44:50.285018 kernel: ... event mask: 000000000000003f
Nov 1 00:44:50.285026 kernel: signal: max sigframe size: 1776
Nov 1 00:44:50.285034 kernel: rcu: Hierarchical SRCU implementation.
Nov 1 00:44:50.285042 kernel: smp: Bringing up secondary CPUs ...
Nov 1 00:44:50.285050 kernel: x86: Booting SMP configuration:
Nov 1 00:44:50.285058 kernel: .... node #0, CPUs: #1
Nov 1 00:44:50.285066 kernel: kvm-clock: cpu 1, msr 531a0041, secondary cpu clock
Nov 1 00:44:50.285074 kernel: kvm-guest: setup async PF for cpu 1
Nov 1 00:44:50.285083 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0
Nov 1 00:44:50.285092 kernel: #2
Nov 1 00:44:50.285100 kernel: kvm-clock: cpu 2, msr 531a0081, secondary cpu clock
Nov 1 00:44:50.285108 kernel: kvm-guest: setup async PF for cpu 2
Nov 1 00:44:50.285116 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0
Nov 1 00:44:50.285124 kernel: #3
Nov 1 00:44:50.285132 kernel: kvm-clock: cpu 3, msr 531a00c1, secondary cpu clock
Nov 1 00:44:50.285140 kernel: kvm-guest: setup async PF for cpu 3
Nov 1 00:44:50.285148 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0
Nov 1 00:44:50.285157 kernel: smp: Brought up 1 node, 4 CPUs
Nov 1 00:44:50.285165 kernel: smpboot: Max logical packages: 1
Nov 1 00:44:50.285173 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Nov 1 00:44:50.285181 kernel: devtmpfs: initialized
Nov 1 00:44:50.285189 kernel: x86/mm: Memory block size: 128MB
Nov 1 00:44:50.285197 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 1 00:44:50.285206 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 1 00:44:50.285213 kernel: pinctrl core: initialized pinctrl subsystem
Nov 1 00:44:50.285221 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 1 00:44:50.285231 kernel: audit: initializing netlink subsys (disabled)
Nov 1 00:44:50.285239 kernel: audit: type=2000 audit(1761957889.687:1): state=initialized audit_enabled=0 res=1
Nov 1 00:44:50.285247 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 1 00:44:50.285255 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 1 00:44:50.285263 kernel: cpuidle: using governor menu
Nov 1 00:44:50.285270 kernel: ACPI: bus type PCI registered
Nov 1 00:44:50.285279 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 1 00:44:50.285286 kernel: dca service started, version 1.12.1
Nov 1 00:44:50.285294 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Nov 1 00:44:50.285304 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820
Nov 1 00:44:50.285312 kernel: PCI: Using configuration type 1 for base access
Nov 1 00:44:50.285320 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 1 00:44:50.285328 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Nov 1 00:44:50.285336 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Nov 1 00:44:50.285344 kernel: ACPI: Added _OSI(Module Device)
Nov 1 00:44:50.285352 kernel: ACPI: Added _OSI(Processor Device)
Nov 1 00:44:50.285360 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 1 00:44:50.285368 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Nov 1 00:44:50.285377 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Nov 1 00:44:50.285385 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Nov 1 00:44:50.285393 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 1 00:44:50.285401 kernel: ACPI: Interpreter enabled
Nov 1 00:44:50.285409 kernel: ACPI: PM: (supports S0 S3 S5)
Nov 1 00:44:50.285417 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 1 00:44:50.285425 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 1 00:44:50.285433 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 1 00:44:50.285441 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 1 00:44:50.285600 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 1 00:44:50.285684 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 1 00:44:50.285760 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 1 00:44:50.285769 kernel: PCI host bridge to bus 0000:00
Nov 1 00:44:50.285883 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 1 00:44:50.285955 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 1 00:44:50.286028 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 1 00:44:50.286109 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Nov 1 00:44:50.286188 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 1 00:44:50.286270 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Nov 1 00:44:50.286348 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 1 00:44:50.286545 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Nov 1 00:44:50.286769 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Nov 1 00:44:50.287016 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Nov 1 00:44:50.287216 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Nov 1 00:44:50.287426 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Nov 1 00:44:50.287638 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 1 00:44:50.287930 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Nov 1 00:44:50.288151 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Nov 1 00:44:50.288370 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Nov 1 00:44:50.288586 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Nov 1 00:44:50.288813 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Nov 1 00:44:50.289028 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Nov 1 00:44:50.289243 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Nov 1 00:44:50.289450 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Nov 1 00:44:50.289650 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Nov 1 00:44:50.289751 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Nov 1 00:44:50.289900 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Nov 1 00:44:50.290028 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Nov 1 00:44:50.290239 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Nov 1 00:44:50.290462 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Nov 1 00:44:50.290609 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 1 00:44:50.290759 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Nov 1 00:44:50.290929 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Nov 1 00:44:50.291050 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Nov 1 00:44:50.291206 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Nov 1 00:44:50.291326 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Nov 1 00:44:50.291343 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 1 00:44:50.291353 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 1 00:44:50.291362 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 1 00:44:50.291371 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 1 00:44:50.291384 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 1 00:44:50.291393 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 1 00:44:50.291402 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 1 00:44:50.291411 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 1 00:44:50.291421 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 1 00:44:50.291438 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 1 00:44:50.291458 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 1 00:44:50.291477 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 1 00:44:50.291494 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 1 00:44:50.291509 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 1 00:44:50.291528 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 1 00:44:50.291547 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 1 00:44:50.291559 kernel: iommu: Default domain type: Translated
Nov 1 00:44:50.291568 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 1 00:44:50.291728 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 1 00:44:50.291929 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 1 00:44:50.292138 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 1 00:44:50.292164 kernel: vgaarb: loaded
Nov 1 00:44:50.292182 kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 1 00:44:50.292201 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Nov 1 00:44:50.292221 kernel: PTP clock support registered
Nov 1 00:44:50.292239 kernel: PCI: Using ACPI for IRQ routing
Nov 1 00:44:50.292258 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 1 00:44:50.292277 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 1 00:44:50.292295 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Nov 1 00:44:50.292314 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 1 00:44:50.292338 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 1 00:44:50.292356 kernel: clocksource: Switched to clocksource kvm-clock
Nov 1 00:44:50.292375 kernel: VFS: Disk quotas dquot_6.6.0
Nov 1 00:44:50.292395 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 1 00:44:50.292414 kernel: pnp: PnP ACPI init
Nov 1 00:44:50.292634 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 1 00:44:50.292657 kernel: pnp: PnP ACPI: found 6 devices
Nov 1 00:44:50.292677 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 1 00:44:50.292694 kernel: NET: Registered PF_INET protocol family
Nov 1 00:44:50.292712 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 1 00:44:50.292725 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 1 00:44:50.292735 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 1 00:44:50.292752 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 1 00:44:50.292766 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Nov 1 00:44:50.292781 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 1 00:44:50.292790 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 1 00:44:50.292800 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 1 00:44:50.292846 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 1 00:44:50.292866 kernel: NET: Registered PF_XDP protocol family
Nov 1 00:44:50.293057 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 1 00:44:50.293173 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 1 00:44:50.293303 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 1 00:44:50.293441 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Nov 1 00:44:50.293552 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 1 00:44:50.293690 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Nov 1 00:44:50.293712 kernel: PCI: CLS 0 bytes, default 64
Nov 1 00:44:50.293736 kernel: Initialise system trusted keyrings
Nov 1 00:44:50.293754 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 1 00:44:50.293773 kernel: Key type asymmetric registered
Nov 1 00:44:50.293792 kernel: Asymmetric key parser 'x509' registered
Nov 1 00:44:50.293810 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Nov 1 00:44:50.293918 kernel: io scheduler mq-deadline registered
Nov 1 00:44:50.293937 kernel: io scheduler kyber registered
Nov 1 00:44:50.293956 kernel: io scheduler bfq registered
Nov 1 00:44:50.293974 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 1 00:44:50.293999 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 1 00:44:50.294018 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 1 00:44:50.294038 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Nov 1 00:44:50.294056 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 1 00:44:50.294076 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 1 00:44:50.294094 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 1 00:44:50.294114 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 1 00:44:50.294132 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 1 00:44:50.294410 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 1 00:44:50.294439 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 1 00:44:50.294638 kernel: rtc_cmos 00:04: registered as rtc0
Nov 1 00:44:50.294858 kernel: rtc_cmos 00:04: setting system clock to 2025-11-01T00:44:49 UTC (1761957889)
Nov 1 00:44:50.294963 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Nov 1 00:44:50.294978 kernel: NET: Registered PF_INET6 protocol family
Nov 1 00:44:50.294989 kernel: Segment Routing with IPv6
Nov 1 00:44:50.294999 kernel: In-situ OAM (IOAM) with IPv6
Nov 1 00:44:50.295010 kernel: NET: Registered PF_PACKET protocol family
Nov 1 00:44:50.295024 kernel: Key type dns_resolver registered
Nov 1 00:44:50.295034 kernel: IPI shorthand broadcast: enabled
Nov 1 00:44:50.295045 kernel: sched_clock: Marking stable (554605350, 190561413)->(860690361, -115523598)
Nov 1 00:44:50.295055 kernel: registered taskstats version 1
Nov 1 00:44:50.295066 kernel: Loading compiled-in X.509 certificates
Nov 1 00:44:50.295077 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: f2055682e6899ad8548fd369019e7b47939b46a0'
Nov 1 00:44:50.295086 kernel: Key type .fscrypt registered
Nov 1 00:44:50.295095 kernel: Key type fscrypt-provisioning registered
Nov 1 00:44:50.295111 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 1 00:44:50.295133 kernel: ima: Allocated hash algorithm: sha1
Nov 1 00:44:50.295151 kernel: ima: No architecture policies found
Nov 1 00:44:50.295169 kernel: clk: Disabling unused clocks
Nov 1 00:44:50.295187 kernel: Freeing unused kernel image (initmem) memory: 47496K
Nov 1 00:44:50.295204 kernel: Write protecting the kernel read-only data: 28672k
Nov 1 00:44:50.295222 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Nov 1 00:44:50.295240 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K
Nov 1 00:44:50.295257 kernel: Run /init as init process
Nov 1 00:44:50.295279 kernel: with arguments:
Nov 1 00:44:50.295297 kernel: /init
Nov 1 00:44:50.295314 kernel: with environment:
Nov 1 00:44:50.295332 kernel: HOME=/
Nov 1 00:44:50.295351 kernel: TERM=linux
Nov 1 00:44:50.295368 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Nov 1 00:44:50.295390 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 1 00:44:50.295412 systemd[1]: Detected virtualization kvm.
Nov 1 00:44:50.295436 systemd[1]: Detected architecture x86-64.
Nov 1 00:44:50.295455 systemd[1]: Running in initrd.
Nov 1 00:44:50.295474 systemd[1]: No hostname configured, using default hostname.
Nov 1 00:44:50.295493 systemd[1]: Hostname set to .
Nov 1 00:44:50.295514 systemd[1]: Initializing machine ID from VM UUID.
Nov 1 00:44:50.295533 systemd[1]: Queued start job for default target initrd.target.
Nov 1 00:44:50.295554 systemd[1]: Started systemd-ask-password-console.path.
Nov 1 00:44:50.295574 systemd[1]: Reached target cryptsetup.target.
Nov 1 00:44:50.295594 systemd[1]: Reached target paths.target.
Nov 1 00:44:50.295619 systemd[1]: Reached target slices.target.
Nov 1 00:44:50.295653 systemd[1]: Reached target swap.target.
Nov 1 00:44:50.295675 systemd[1]: Reached target timers.target.
Nov 1 00:44:50.295695 systemd[1]: Listening on iscsid.socket.
Nov 1 00:44:50.295716 systemd[1]: Listening on iscsiuio.socket.
Nov 1 00:44:50.295741 systemd[1]: Listening on systemd-journald-audit.socket.
Nov 1 00:44:50.295761 systemd[1]: Listening on systemd-journald-dev-log.socket.
Nov 1 00:44:50.295781 systemd[1]: Listening on systemd-journald.socket.
Nov 1 00:44:50.295801 systemd[1]: Listening on systemd-networkd.socket.
Nov 1 00:44:50.295978 systemd[1]: Listening on systemd-udevd-control.socket.
Nov 1 00:44:50.296000 systemd[1]: Listening on systemd-udevd-kernel.socket.
Nov 1 00:44:50.296021 systemd[1]: Reached target sockets.target.
Nov 1 00:44:50.296040 systemd[1]: Starting kmod-static-nodes.service...
Nov 1 00:44:50.296060 systemd[1]: Finished network-cleanup.service.
Nov 1 00:44:50.296087 systemd[1]: Starting systemd-fsck-usr.service...
Nov 1 00:44:50.296107 systemd[1]: Starting systemd-journald.service...
Nov 1 00:44:50.296127 systemd[1]: Starting systemd-modules-load.service...
Nov 1 00:44:50.296147 systemd[1]: Starting systemd-resolved.service...
Nov 1 00:44:50.296167 systemd[1]: Starting systemd-vconsole-setup.service...
Nov 1 00:44:50.296187 systemd[1]: Finished kmod-static-nodes.service.
Nov 1 00:44:50.296208 systemd[1]: Finished systemd-fsck-usr.service.
Nov 1 00:44:50.296227 kernel: audit: type=1130 audit(1761957890.293:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:44:50.296251 systemd[1]: Started systemd-resolved.service.
Nov 1 00:44:50.296272 systemd-journald[197]: Journal started
Nov 1 00:44:50.296350 systemd-journald[197]: Runtime Journal (/run/log/journal/da85435e52a64d87bb68f9262d2e4aed) is 6.0M, max 48.5M, 42.5M free.
Nov 1 00:44:50.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:44:50.223896 systemd-modules-load[198]: Inserted module 'overlay'
Nov 1 00:44:50.277874 systemd-resolved[199]: Positive Trust Anchors:
Nov 1 00:44:50.313152 systemd[1]: Started systemd-journald.service.
Nov 1 00:44:50.313186 kernel: audit: type=1130 audit(1761957890.303:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:44:50.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:44:50.277889 systemd-resolved[199]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 1 00:44:50.321120 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 1 00:44:50.321150 kernel: audit: type=1130 audit(1761957890.313:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:44:50.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:44:50.277924 systemd-resolved[199]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Nov 1 00:44:50.358005 kernel: Bridge firewalling registered
Nov 1 00:44:50.358040 kernel: audit: type=1130 audit(1761957890.331:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:44:50.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:44:50.290429 systemd-resolved[199]: Defaulting to hostname 'linux'.
Nov 1 00:44:50.315061 systemd[1]: Finished systemd-vconsole-setup.service.
Nov 1 00:44:50.332357 systemd[1]: Reached target nss-lookup.target.
Nov 1 00:44:50.358372 systemd-modules-load[198]: Inserted module 'br_netfilter'
Nov 1 00:44:50.365684 systemd[1]: Starting dracut-cmdline-ask.service...
Nov 1 00:44:50.366424 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Nov 1 00:44:50.391069 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Nov 1 00:44:50.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:44:50.405871 kernel: audit: type=1130 audit(1761957890.390:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:44:50.417515 systemd[1]: Finished dracut-cmdline-ask.service.
Nov 1 00:44:50.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:44:50.421014 systemd[1]: Starting dracut-cmdline.service...
Nov 1 00:44:50.439009 kernel: audit: type=1130 audit(1761957890.419:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:44:50.439082 dracut-cmdline[216]: dracut-dracut-053
Nov 1 00:44:50.439082 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA
Nov 1 00:44:50.439082 dracut-cmdline[216]: BEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c4c72a4f851a6da01cbc7150799371516ef8311ea786098908d8eb164df01ee2
Nov 1 00:44:50.459159 kernel: SCSI subsystem initialized
Nov 1 00:44:50.482039 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 1 00:44:50.482111 kernel: device-mapper: uevent: version 1.0.3
Nov 1 00:44:50.486685 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Nov 1 00:44:50.490683 systemd-modules-load[198]: Inserted module 'dm_multipath'
Nov 1 00:44:50.492126 systemd[1]: Finished systemd-modules-load.service.
Nov 1 00:44:50.506930 kernel: audit: type=1130 audit(1761957890.493:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:50.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:50.495233 systemd[1]: Starting systemd-sysctl.service... Nov 1 00:44:50.510369 systemd[1]: Finished systemd-sysctl.service. Nov 1 00:44:50.513000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:50.520867 kernel: audit: type=1130 audit(1761957890.513:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:50.525925 kernel: Loading iSCSI transport class v2.0-870. Nov 1 00:44:50.554409 kernel: iscsi: registered transport (tcp) Nov 1 00:44:50.587868 kernel: iscsi: registered transport (qla4xxx) Nov 1 00:44:50.587955 kernel: QLogic iSCSI HBA Driver Nov 1 00:44:50.650787 systemd[1]: Finished dracut-cmdline.service. Nov 1 00:44:50.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:50.674910 kernel: audit: type=1130 audit(1761957890.654:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:50.661043 systemd[1]: Starting dracut-pre-udev.service... 
Nov 1 00:44:50.793884 kernel: raid6: avx2x4 gen() 18900 MB/s Nov 1 00:44:50.809195 kernel: raid6: avx2x4 xor() 4226 MB/s Nov 1 00:44:50.826857 kernel: raid6: avx2x2 gen() 19734 MB/s Nov 1 00:44:50.845859 kernel: raid6: avx2x2 xor() 12441 MB/s Nov 1 00:44:50.865848 kernel: raid6: avx2x1 gen() 15895 MB/s Nov 1 00:44:50.886849 kernel: raid6: avx2x1 xor() 8781 MB/s Nov 1 00:44:50.905493 kernel: raid6: sse2x4 gen() 8935 MB/s Nov 1 00:44:50.924468 kernel: raid6: sse2x4 xor() 4171 MB/s Nov 1 00:44:50.939881 kernel: raid6: sse2x2 gen() 9039 MB/s Nov 1 00:44:50.957874 kernel: raid6: sse2x2 xor() 5974 MB/s Nov 1 00:44:50.975883 kernel: raid6: sse2x1 gen() 8116 MB/s Nov 1 00:44:50.994781 kernel: raid6: sse2x1 xor() 4954 MB/s Nov 1 00:44:50.994882 kernel: raid6: using algorithm avx2x2 gen() 19734 MB/s Nov 1 00:44:50.994896 kernel: raid6: .... xor() 12441 MB/s, rmw enabled Nov 1 00:44:50.997656 kernel: raid6: using avx2x2 recovery algorithm Nov 1 00:44:51.014873 kernel: xor: automatically using best checksumming function avx Nov 1 00:44:51.171206 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Nov 1 00:44:51.188921 systemd[1]: Finished dracut-pre-udev.service. Nov 1 00:44:51.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:51.200000 audit: BPF prog-id=7 op=LOAD Nov 1 00:44:51.200000 audit: BPF prog-id=8 op=LOAD Nov 1 00:44:51.204329 systemd[1]: Starting systemd-udevd.service... Nov 1 00:44:51.237034 systemd-udevd[401]: Using default interface naming scheme 'v252'. Nov 1 00:44:51.243600 systemd[1]: Started systemd-udevd.service. Nov 1 00:44:51.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:44:51.251853 systemd[1]: Starting dracut-pre-trigger.service... Nov 1 00:44:51.272767 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation Nov 1 00:44:51.352637 systemd[1]: Finished dracut-pre-trigger.service. Nov 1 00:44:51.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:51.367767 systemd[1]: Starting systemd-udev-trigger.service... Nov 1 00:44:51.428461 systemd[1]: Finished systemd-udev-trigger.service. Nov 1 00:44:51.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:51.528467 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Nov 1 00:44:51.530697 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 1 00:44:51.530716 kernel: GPT:9289727 != 19775487 Nov 1 00:44:51.530730 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 1 00:44:51.530743 kernel: GPT:9289727 != 19775487 Nov 1 00:44:51.530761 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 1 00:44:51.530773 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 1 00:44:51.566908 kernel: libata version 3.00 loaded. Nov 1 00:44:51.570190 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Nov 1 00:44:51.680273 kernel: cryptd: max_cpu_qlen set to 1000 Nov 1 00:44:51.680306 kernel: AVX2 version of gcm_enc/dec engaged. 
Nov 1 00:44:51.680319 kernel: AES CTR mode by8 optimization enabled Nov 1 00:44:51.680343 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (451) Nov 1 00:44:51.680356 kernel: ahci 0000:00:1f.2: version 3.0 Nov 1 00:44:51.680565 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Nov 1 00:44:51.680581 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Nov 1 00:44:51.680701 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Nov 1 00:44:51.680875 kernel: scsi host0: ahci Nov 1 00:44:51.681032 kernel: scsi host1: ahci Nov 1 00:44:51.681158 kernel: scsi host2: ahci Nov 1 00:44:51.681281 kernel: scsi host3: ahci Nov 1 00:44:51.681412 kernel: scsi host4: ahci Nov 1 00:44:51.681567 kernel: scsi host5: ahci Nov 1 00:44:51.681690 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Nov 1 00:44:51.681703 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Nov 1 00:44:51.681716 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Nov 1 00:44:51.681731 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Nov 1 00:44:51.681743 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Nov 1 00:44:51.681755 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Nov 1 00:44:51.684012 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Nov 1 00:44:51.701667 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Nov 1 00:44:51.710369 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Nov 1 00:44:51.719554 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Nov 1 00:44:51.728243 systemd[1]: Starting disk-uuid.service... Nov 1 00:44:51.743935 disk-uuid[521]: Primary Header is updated. Nov 1 00:44:51.743935 disk-uuid[521]: Secondary Entries is updated. Nov 1 00:44:51.743935 disk-uuid[521]: Secondary Header is updated. 
Nov 1 00:44:51.785979 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 1 00:44:51.932422 kernel: ata1: SATA link down (SStatus 0 SControl 300) Nov 1 00:44:51.932508 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Nov 1 00:44:51.932523 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 1 00:44:51.932535 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 1 00:44:51.932860 kernel: ata2: SATA link down (SStatus 0 SControl 300) Nov 1 00:44:51.934865 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 1 00:44:51.936869 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Nov 1 00:44:51.938464 kernel: ata3.00: applying bridge limits Nov 1 00:44:51.939709 kernel: ata3.00: configured for UDMA/100 Nov 1 00:44:51.942842 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Nov 1 00:44:51.977297 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Nov 1 00:44:51.994664 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 1 00:44:51.994686 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Nov 1 00:44:52.790770 disk-uuid[522]: The operation has completed successfully. Nov 1 00:44:52.794188 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 1 00:44:52.873259 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 1 00:44:52.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:52.874000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:52.873339 systemd[1]: Finished disk-uuid.service. Nov 1 00:44:52.875415 systemd[1]: Starting verity-setup.service... Nov 1 00:44:52.888846 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Nov 1 00:44:52.908860 systemd[1]: Found device dev-mapper-usr.device. 
Nov 1 00:44:52.911463 systemd[1]: Mounting sysusr-usr.mount... Nov 1 00:44:52.913047 systemd[1]: Finished verity-setup.service. Nov 1 00:44:52.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:52.978842 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Nov 1 00:44:52.978970 systemd[1]: Mounted sysusr-usr.mount. Nov 1 00:44:52.980342 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Nov 1 00:44:52.980955 systemd[1]: Starting ignition-setup.service... Nov 1 00:44:52.994067 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:44:52.994161 kernel: BTRFS info (device vda6): using free space tree Nov 1 00:44:52.994238 kernel: BTRFS info (device vda6): has skinny extents Nov 1 00:44:52.982950 systemd[1]: Starting parse-ip-for-networkd.service... Nov 1 00:44:53.003408 systemd[1]: mnt-oem.mount: Deactivated successfully. Nov 1 00:44:53.012654 systemd[1]: Finished ignition-setup.service. Nov 1 00:44:53.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:53.016298 systemd[1]: Starting ignition-fetch-offline.service... Nov 1 00:44:53.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:53.048847 systemd[1]: Finished parse-ip-for-networkd.service. Nov 1 00:44:53.067586 systemd[1]: Starting systemd-networkd.service... 
Nov 1 00:44:53.066000 audit: BPF prog-id=9 op=LOAD Nov 1 00:44:53.094277 systemd-networkd[712]: lo: Link UP Nov 1 00:44:53.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:53.094289 systemd-networkd[712]: lo: Gained carrier Nov 1 00:44:53.094731 systemd-networkd[712]: Enumeration completed Nov 1 00:44:53.094849 systemd[1]: Started systemd-networkd.service. Nov 1 00:44:53.094981 systemd-networkd[712]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 00:44:53.098136 systemd-networkd[712]: eth0: Link UP Nov 1 00:44:53.098140 systemd-networkd[712]: eth0: Gained carrier Nov 1 00:44:53.100884 systemd[1]: Reached target network.target. Nov 1 00:44:53.105948 systemd[1]: Starting iscsiuio.service... Nov 1 00:44:53.157977 ignition[654]: Ignition 2.14.0 Nov 1 00:44:53.157988 ignition[654]: Stage: fetch-offline Nov 1 00:44:53.158052 ignition[654]: no configs at "/usr/lib/ignition/base.d" Nov 1 00:44:53.158065 ignition[654]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 1 00:44:53.158196 ignition[654]: parsed url from cmdline: "" Nov 1 00:44:53.158202 ignition[654]: no config URL provided Nov 1 00:44:53.158208 ignition[654]: reading system config file "/usr/lib/ignition/user.ign" Nov 1 00:44:53.158217 ignition[654]: no config at "/usr/lib/ignition/user.ign" Nov 1 00:44:53.158244 ignition[654]: op(1): [started] loading QEMU firmware config module Nov 1 00:44:53.158250 ignition[654]: op(1): executing: "modprobe" "qemu_fw_cfg" Nov 1 00:44:53.169301 ignition[654]: op(1): [finished] loading QEMU firmware config module Nov 1 00:44:53.172608 systemd[1]: Started iscsiuio.service. Nov 1 00:44:53.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:44:53.174911 systemd-networkd[712]: eth0: DHCPv4 address 10.0.0.128/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 1 00:44:53.177076 systemd[1]: Starting iscsid.service... Nov 1 00:44:53.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:53.185513 iscsid[725]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Nov 1 00:44:53.185513 iscsid[725]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Nov 1 00:44:53.185513 iscsid[725]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Nov 1 00:44:53.185513 iscsid[725]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Nov 1 00:44:53.185513 iscsid[725]: If using hardware iscsi like qla4xxx this message can be ignored. Nov 1 00:44:53.185513 iscsid[725]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Nov 1 00:44:53.185513 iscsid[725]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Nov 1 00:44:53.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:53.181025 systemd[1]: Started iscsid.service. Nov 1 00:44:53.183950 systemd[1]: Starting dracut-initqueue.service... Nov 1 00:44:53.194502 systemd[1]: Finished dracut-initqueue.service. Nov 1 00:44:53.196577 systemd[1]: Reached target remote-fs-pre.target. Nov 1 00:44:53.203064 systemd[1]: Reached target remote-cryptsetup.target. 
Nov 1 00:44:53.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:53.207104 systemd[1]: Reached target remote-fs.target. Nov 1 00:44:53.213617 systemd[1]: Starting dracut-pre-mount.service... Nov 1 00:44:53.227855 systemd[1]: Finished dracut-pre-mount.service. Nov 1 00:44:53.295991 ignition[654]: parsing config with SHA512: 2347b848b0ae9ba42455f0b8014290d06419c977f914ad063859fee7b1bc0e97a92ff8fcfbbe94946fef1ac0a0221f8086c9785c3bd47f4ce9a7418cff672a01 Nov 1 00:44:53.333203 unknown[654]: fetched base config from "system" Nov 1 00:44:53.333495 unknown[654]: fetched user config from "qemu" Nov 1 00:44:53.334119 ignition[654]: fetch-offline: fetch-offline passed Nov 1 00:44:53.334180 ignition[654]: Ignition finished successfully Nov 1 00:44:53.339138 systemd[1]: Finished ignition-fetch-offline.service. Nov 1 00:44:53.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:53.340656 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Nov 1 00:44:53.341559 systemd[1]: Starting ignition-kargs.service... Nov 1 00:44:53.363540 ignition[739]: Ignition 2.14.0 Nov 1 00:44:53.363551 ignition[739]: Stage: kargs Nov 1 00:44:53.363658 ignition[739]: no configs at "/usr/lib/ignition/base.d" Nov 1 00:44:53.363667 ignition[739]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 1 00:44:53.365081 ignition[739]: kargs: kargs passed Nov 1 00:44:53.365127 ignition[739]: Ignition finished successfully Nov 1 00:44:53.372234 systemd[1]: Finished ignition-kargs.service. 
Nov 1 00:44:53.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:53.373508 systemd[1]: Starting ignition-disks.service... Nov 1 00:44:53.381974 ignition[745]: Ignition 2.14.0 Nov 1 00:44:53.381986 ignition[745]: Stage: disks Nov 1 00:44:53.382097 ignition[745]: no configs at "/usr/lib/ignition/base.d" Nov 1 00:44:53.382107 ignition[745]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 1 00:44:53.383124 ignition[745]: disks: disks passed Nov 1 00:44:53.383167 ignition[745]: Ignition finished successfully Nov 1 00:44:53.391974 systemd[1]: Finished ignition-disks.service. Nov 1 00:44:53.392156 systemd[1]: Reached target initrd-root-device.target. Nov 1 00:44:53.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:53.395758 systemd[1]: Reached target local-fs-pre.target. Nov 1 00:44:53.398422 systemd[1]: Reached target local-fs.target. Nov 1 00:44:53.400800 systemd[1]: Reached target sysinit.target. Nov 1 00:44:53.401112 systemd[1]: Reached target basic.target. Nov 1 00:44:53.406244 systemd[1]: Starting systemd-fsck-root.service... Nov 1 00:44:53.416781 systemd-fsck[753]: ROOT: clean, 637/553520 files, 56032/553472 blocks Nov 1 00:44:53.422413 systemd[1]: Finished systemd-fsck-root.service. Nov 1 00:44:53.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:53.423229 systemd[1]: Mounting sysroot.mount... Nov 1 00:44:53.431110 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Nov 1 00:44:53.431797 systemd[1]: Mounted sysroot.mount. 
Nov 1 00:44:53.435219 systemd[1]: Reached target initrd-root-fs.target. Nov 1 00:44:53.438958 systemd[1]: Mounting sysroot-usr.mount... Nov 1 00:44:53.441607 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Nov 1 00:44:53.441642 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 1 00:44:53.441660 systemd[1]: Reached target ignition-diskful.target. Nov 1 00:44:53.449993 systemd[1]: Mounted sysroot-usr.mount. Nov 1 00:44:53.452995 systemd[1]: Starting initrd-setup-root.service... Nov 1 00:44:53.457915 initrd-setup-root[763]: cut: /sysroot/etc/passwd: No such file or directory Nov 1 00:44:53.462912 initrd-setup-root[771]: cut: /sysroot/etc/group: No such file or directory Nov 1 00:44:53.468065 initrd-setup-root[779]: cut: /sysroot/etc/shadow: No such file or directory Nov 1 00:44:53.472431 initrd-setup-root[787]: cut: /sysroot/etc/gshadow: No such file or directory Nov 1 00:44:53.507036 systemd[1]: Finished initrd-setup-root.service. Nov 1 00:44:53.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:53.511508 systemd[1]: Starting ignition-mount.service... Nov 1 00:44:53.515220 systemd[1]: Starting sysroot-boot.service... Nov 1 00:44:53.518950 bash[804]: umount: /sysroot/usr/share/oem: not mounted. Nov 1 00:44:53.631000 ignition[805]: INFO : Ignition 2.14.0 Nov 1 00:44:53.631000 ignition[805]: INFO : Stage: mount Nov 1 00:44:53.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:44:53.634964 ignition[805]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 00:44:53.634964 ignition[805]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 1 00:44:53.634964 ignition[805]: INFO : mount: mount passed Nov 1 00:44:53.634964 ignition[805]: INFO : Ignition finished successfully Nov 1 00:44:53.631203 systemd[1]: Finished sysroot-boot.service. Nov 1 00:44:53.642010 systemd[1]: Finished ignition-mount.service. Nov 1 00:44:53.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:53.919958 systemd[1]: Mounting sysroot-usr-share-oem.mount... Nov 1 00:44:53.960248 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (813) Nov 1 00:44:53.963834 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:44:53.963856 kernel: BTRFS info (device vda6): using free space tree Nov 1 00:44:53.963866 kernel: BTRFS info (device vda6): has skinny extents Nov 1 00:44:53.968868 systemd[1]: Mounted sysroot-usr-share-oem.mount. Nov 1 00:44:53.969801 systemd[1]: Starting ignition-files.service... 
Nov 1 00:44:53.989550 ignition[833]: INFO : Ignition 2.14.0 Nov 1 00:44:53.989550 ignition[833]: INFO : Stage: files Nov 1 00:44:53.992620 ignition[833]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 00:44:53.992620 ignition[833]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 1 00:44:53.996909 ignition[833]: DEBUG : files: compiled without relabeling support, skipping Nov 1 00:44:53.999632 ignition[833]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 1 00:44:53.999632 ignition[833]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 1 00:44:54.006009 ignition[833]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 1 00:44:54.006009 ignition[833]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 1 00:44:54.006009 ignition[833]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 1 00:44:54.006009 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Nov 1 00:44:54.006009 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Nov 1 00:44:54.006009 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 1 00:44:54.006009 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Nov 1 00:44:54.002103 unknown[833]: wrote ssh authorized keys file for user: core Nov 1 00:44:54.049137 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Nov 1 00:44:54.245293 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 1 00:44:54.245293 
ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Nov 1 00:44:54.252036 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Nov 1 00:44:54.252036 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 1 00:44:54.252036 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 1 00:44:54.252036 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 00:44:54.252036 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 00:44:54.252036 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 00:44:54.252036 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 00:44:54.252036 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 00:44:54.252036 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 00:44:54.252036 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 00:44:54.252036 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 00:44:54.252036 ignition[833]: INFO : files: 
createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 00:44:54.252036 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Nov 1 00:44:54.592040 systemd-networkd[712]: eth0: Gained IPv6LL Nov 1 00:44:54.672096 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Nov 1 00:44:55.904026 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 00:44:55.904026 ignition[833]: INFO : files: op(c): [started] processing unit "containerd.service" Nov 1 00:44:55.919726 ignition[833]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Nov 1 00:44:55.919726 ignition[833]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Nov 1 00:44:55.919726 ignition[833]: INFO : files: op(c): [finished] processing unit "containerd.service" Nov 1 00:44:55.919726 ignition[833]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Nov 1 00:44:55.919726 ignition[833]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 00:44:55.919726 ignition[833]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 00:44:55.919726 ignition[833]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Nov 1 00:44:55.919726 ignition[833]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Nov 1 00:44:55.919726 ignition[833]: INFO : files: op(10): 
op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 1 00:44:55.919726 ignition[833]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 1 00:44:55.919726 ignition[833]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Nov 1 00:44:55.919726 ignition[833]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Nov 1 00:44:55.919726 ignition[833]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Nov 1 00:44:55.919726 ignition[833]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service" Nov 1 00:44:55.919726 ignition[833]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" Nov 1 00:44:56.040988 ignition[833]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service" Nov 1 00:44:56.047335 ignition[833]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" Nov 1 00:44:56.047335 ignition[833]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 1 00:44:56.047335 ignition[833]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 1 00:44:56.047335 ignition[833]: INFO : files: files passed Nov 1 00:44:56.047335 ignition[833]: INFO : Ignition finished successfully Nov 1 00:44:56.061746 systemd[1]: Finished ignition-files.service. Nov 1 00:44:56.077240 kernel: kauditd_printk_skb: 24 callbacks suppressed Nov 1 00:44:56.077270 kernel: audit: type=1130 audit(1761957896.064:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:44:56.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:56.067655 systemd[1]: Starting initrd-setup-root-after-ignition.service... Nov 1 00:44:56.079184 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Nov 1 00:44:56.086270 initrd-setup-root-after-ignition[857]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Nov 1 00:44:56.080961 systemd[1]: Starting ignition-quench.service... Nov 1 00:44:56.097995 initrd-setup-root-after-ignition[861]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 1 00:44:56.108836 systemd[1]: Finished initrd-setup-root-after-ignition.service. Nov 1 00:44:56.122578 kernel: audit: type=1130 audit(1761957896.111:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:56.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:56.112209 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 1 00:44:56.112304 systemd[1]: Finished ignition-quench.service. Nov 1 00:44:56.140404 kernel: audit: type=1130 audit(1761957896.125:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:56.140448 kernel: audit: type=1131 audit(1761957896.125:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Nov 1 00:44:56.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:56.125000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:56.126314 systemd[1]: Reached target ignition-complete.target. Nov 1 00:44:56.144671 systemd[1]: Starting initrd-parse-etc.service... Nov 1 00:44:56.168295 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 1 00:44:56.168415 systemd[1]: Finished initrd-parse-etc.service. Nov 1 00:44:56.188347 kernel: audit: type=1130 audit(1761957896.171:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:56.188396 kernel: audit: type=1131 audit(1761957896.171:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:56.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:56.171000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:56.172282 systemd[1]: Reached target initrd-fs.target. Nov 1 00:44:56.191622 systemd[1]: Reached target initrd.target. Nov 1 00:44:56.195023 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Nov 1 00:44:56.200544 systemd[1]: Starting dracut-pre-pivot.service... 
Nov 1 00:44:56.216260 systemd[1]: Finished dracut-pre-pivot.service. Nov 1 00:44:56.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:56.221025 systemd[1]: Starting initrd-cleanup.service... Nov 1 00:44:56.230803 kernel: audit: type=1130 audit(1761957896.219:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:56.241994 systemd[1]: Stopped target nss-lookup.target. Nov 1 00:44:56.245307 systemd[1]: Stopped target remote-cryptsetup.target. Nov 1 00:44:56.249021 systemd[1]: Stopped target timers.target. Nov 1 00:44:56.252604 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 1 00:44:56.252781 systemd[1]: Stopped dracut-pre-pivot.service. Nov 1 00:44:56.264058 kernel: audit: type=1131 audit(1761957896.255:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:56.255000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:56.256414 systemd[1]: Stopped target initrd.target. Nov 1 00:44:56.267189 systemd[1]: Stopped target basic.target. Nov 1 00:44:56.270448 systemd[1]: Stopped target ignition-complete.target. Nov 1 00:44:56.274124 systemd[1]: Stopped target ignition-diskful.target. Nov 1 00:44:56.277948 systemd[1]: Stopped target initrd-root-device.target. Nov 1 00:44:56.283151 systemd[1]: Stopped target remote-fs.target. Nov 1 00:44:56.286187 systemd[1]: Stopped target remote-fs-pre.target. Nov 1 00:44:56.290630 systemd[1]: Stopped target sysinit.target. 
Nov 1 00:44:56.294029 systemd[1]: Stopped target local-fs.target. Nov 1 00:44:56.297196 systemd[1]: Stopped target local-fs-pre.target. Nov 1 00:44:56.300532 systemd[1]: Stopped target swap.target. Nov 1 00:44:56.303460 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 1 00:44:56.305447 systemd[1]: Stopped dracut-pre-mount.service. Nov 1 00:44:56.308000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:56.308802 systemd[1]: Stopped target cryptsetup.target. Nov 1 00:44:56.317421 kernel: audit: type=1131 audit(1761957896.308:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:56.317480 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 1 00:44:56.319289 systemd[1]: Stopped dracut-initqueue.service. Nov 1 00:44:56.322000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:56.322622 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 1 00:44:56.329584 kernel: audit: type=1131 audit(1761957896.322:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:56.322763 systemd[1]: Stopped ignition-fetch-offline.service. Nov 1 00:44:56.332000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:56.333092 systemd[1]: Stopped target paths.target. 
Nov 1 00:44:56.336001 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 1 00:44:56.341900 systemd[1]: Stopped systemd-ask-password-console.path. Nov 1 00:44:56.345716 systemd[1]: Stopped target slices.target. Nov 1 00:44:56.348924 systemd[1]: Stopped target sockets.target. Nov 1 00:44:56.351892 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 1 00:44:56.353802 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Nov 1 00:44:56.356000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:56.357074 systemd[1]: ignition-files.service: Deactivated successfully. Nov 1 00:44:56.357166 systemd[1]: Stopped ignition-files.service. Nov 1 00:44:56.360000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:56.361946 systemd[1]: Stopping ignition-mount.service... Nov 1 00:44:56.364613 systemd[1]: Stopping iscsid.service... Nov 1 00:44:56.366771 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 1 00:44:56.368374 iscsid[725]: iscsid shutting down. Nov 1 00:44:56.369000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:56.367559 systemd[1]: Stopped kmod-static-nodes.service. Nov 1 00:44:56.374000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:44:56.376000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:56.370865 systemd[1]: Stopping sysroot-boot.service... Nov 1 00:44:56.379392 ignition[874]: INFO : Ignition 2.14.0 Nov 1 00:44:56.379392 ignition[874]: INFO : Stage: umount Nov 1 00:44:56.379392 ignition[874]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 00:44:56.379392 ignition[874]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 1 00:44:56.379392 ignition[874]: INFO : umount: umount passed Nov 1 00:44:56.379392 ignition[874]: INFO : Ignition finished successfully Nov 1 00:44:56.379000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:56.382000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:56.387000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:56.391000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:56.393000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:56.372027 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Nov 1 00:44:56.400000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:56.372175 systemd[1]: Stopped systemd-udev-trigger.service. Nov 1 00:44:56.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:56.403000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:56.374873 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 1 00:44:56.374968 systemd[1]: Stopped dracut-pre-trigger.service. Nov 1 00:44:56.377967 systemd[1]: iscsid.service: Deactivated successfully. Nov 1 00:44:56.378046 systemd[1]: Stopped iscsid.service. Nov 1 00:44:56.379744 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 1 00:44:56.379829 systemd[1]: Stopped ignition-mount.service. Nov 1 00:44:56.382427 systemd[1]: iscsid.socket: Deactivated successfully. Nov 1 00:44:56.382496 systemd[1]: Closed iscsid.socket. Nov 1 00:44:56.384540 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 1 00:44:56.421000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:56.384575 systemd[1]: Stopped ignition-disks.service. Nov 1 00:44:56.387560 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 1 00:44:56.387593 systemd[1]: Stopped ignition-kargs.service. Nov 1 00:44:56.429000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Nov 1 00:44:56.429000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:56.434000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:56.391131 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 1 00:44:56.391163 systemd[1]: Stopped ignition-setup.service. Nov 1 00:44:56.439000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:56.394022 systemd[1]: Stopping iscsiuio.service... Nov 1 00:44:56.398036 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 1 00:44:56.442000 audit: BPF prog-id=6 op=UNLOAD Nov 1 00:44:56.445000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:56.398522 systemd[1]: iscsiuio.service: Deactivated successfully. Nov 1 00:44:56.398596 systemd[1]: Stopped iscsiuio.service. Nov 1 00:44:56.401035 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 1 00:44:56.450000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:56.401124 systemd[1]: Finished initrd-cleanup.service. Nov 1 00:44:56.404433 systemd[1]: Stopped target network.target. Nov 1 00:44:56.406680 systemd[1]: iscsiuio.socket: Deactivated successfully. 
Nov 1 00:44:56.458000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:56.461000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:56.406711 systemd[1]: Closed iscsiuio.socket. Nov 1 00:44:56.461000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:56.461000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:56.461000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:56.461000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:56.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:56.466000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:56.409322 systemd[1]: Stopping systemd-networkd.service... 
Nov 1 00:44:56.472000 audit: BPF prog-id=8 op=UNLOAD Nov 1 00:44:56.472000 audit: BPF prog-id=7 op=UNLOAD Nov 1 00:44:56.412172 systemd[1]: Stopping systemd-resolved.service... Nov 1 00:44:56.476000 audit: BPF prog-id=5 op=UNLOAD Nov 1 00:44:56.476000 audit: BPF prog-id=4 op=UNLOAD Nov 1 00:44:56.476000 audit: BPF prog-id=3 op=UNLOAD Nov 1 00:44:56.417877 systemd-networkd[712]: eth0: DHCPv6 lease lost Nov 1 00:44:56.476000 audit: BPF prog-id=9 op=UNLOAD Nov 1 00:44:56.419461 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 1 00:44:56.419601 systemd[1]: Stopped systemd-networkd.service. Nov 1 00:44:56.423610 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 1 00:44:56.423806 systemd[1]: Closed systemd-networkd.socket. Nov 1 00:44:56.427215 systemd[1]: Stopping network-cleanup.service... Nov 1 00:44:56.429302 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 1 00:44:56.429346 systemd[1]: Stopped parse-ip-for-networkd.service. Nov 1 00:44:56.429466 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 1 00:44:56.429499 systemd[1]: Stopped systemd-sysctl.service. Nov 1 00:44:56.431971 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 1 00:44:56.432010 systemd[1]: Stopped systemd-modules-load.service. Nov 1 00:44:56.434812 systemd[1]: Stopping systemd-udevd.service... Nov 1 00:44:56.499052 systemd-journald[197]: Received SIGTERM from PID 1 (n/a). Nov 1 00:44:56.437044 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Nov 1 00:44:56.437457 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 1 00:44:56.437535 systemd[1]: Stopped systemd-resolved.service. Nov 1 00:44:56.443533 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 1 00:44:56.443634 systemd[1]: Stopped network-cleanup.service. Nov 1 00:44:56.447470 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Nov 1 00:44:56.447575 systemd[1]: Stopped systemd-udevd.service. Nov 1 00:44:56.451065 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 1 00:44:56.451126 systemd[1]: Closed systemd-udevd-control.socket. Nov 1 00:44:56.453378 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 1 00:44:56.453418 systemd[1]: Closed systemd-udevd-kernel.socket. Nov 1 00:44:56.456122 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 1 00:44:56.456178 systemd[1]: Stopped dracut-pre-udev.service. Nov 1 00:44:56.458910 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 1 00:44:56.458974 systemd[1]: Stopped dracut-cmdline.service. Nov 1 00:44:56.461561 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 1 00:44:56.461600 systemd[1]: Stopped dracut-cmdline-ask.service. Nov 1 00:44:56.462413 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Nov 1 00:44:56.462598 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 1 00:44:56.462636 systemd[1]: Stopped systemd-vconsole-setup.service. Nov 1 00:44:56.463442 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 1 00:44:56.463516 systemd[1]: Stopped sysroot-boot.service. Nov 1 00:44:56.463841 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 1 00:44:56.463872 systemd[1]: Stopped initrd-setup-root.service. Nov 1 00:44:56.466731 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 1 00:44:56.466797 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Nov 1 00:44:56.467251 systemd[1]: Reached target initrd-switch-root.target. Nov 1 00:44:56.467995 systemd[1]: Starting initrd-switch-root.service... Nov 1 00:44:56.472567 systemd[1]: Switching root. Nov 1 00:44:56.506688 systemd-journald[197]: Journal stopped Nov 1 00:44:59.768998 kernel: SELinux: Class mctp_socket not defined in policy. Nov 1 00:44:59.769056 kernel: SELinux: Class anon_inode not defined in policy. 
Nov 1 00:44:59.769068 kernel: SELinux: the above unknown classes and permissions will be allowed Nov 1 00:44:59.769078 kernel: SELinux: policy capability network_peer_controls=1 Nov 1 00:44:59.769089 kernel: SELinux: policy capability open_perms=1 Nov 1 00:44:59.769098 kernel: SELinux: policy capability extended_socket_class=1 Nov 1 00:44:59.769111 kernel: SELinux: policy capability always_check_network=0 Nov 1 00:44:59.769121 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 1 00:44:59.769131 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 1 00:44:59.769140 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 1 00:44:59.769149 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 1 00:44:59.769162 systemd[1]: Successfully loaded SELinux policy in 53.608ms. Nov 1 00:44:59.769180 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.405ms. Nov 1 00:44:59.769194 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Nov 1 00:44:59.769206 systemd[1]: Detected virtualization kvm. Nov 1 00:44:59.769217 systemd[1]: Detected architecture x86-64. Nov 1 00:44:59.769227 systemd[1]: Detected first boot. Nov 1 00:44:59.769237 systemd[1]: Initializing machine ID from VM UUID. Nov 1 00:44:59.769249 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Nov 1 00:44:59.769259 systemd[1]: Populated /etc with preset unit settings. Nov 1 00:44:59.769269 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Nov 1 00:44:59.769284 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 00:44:59.769295 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:44:59.769306 systemd[1]: Queued start job for default target multi-user.target. Nov 1 00:44:59.769316 systemd[1]: Unnecessary job was removed for dev-vda6.device. Nov 1 00:44:59.769328 systemd[1]: Created slice system-addon\x2dconfig.slice. Nov 1 00:44:59.769338 systemd[1]: Created slice system-addon\x2drun.slice. Nov 1 00:44:59.769348 systemd[1]: Created slice system-getty.slice. Nov 1 00:44:59.769359 systemd[1]: Created slice system-modprobe.slice. Nov 1 00:44:59.769369 systemd[1]: Created slice system-serial\x2dgetty.slice. Nov 1 00:44:59.769380 systemd[1]: Created slice system-system\x2dcloudinit.slice. Nov 1 00:44:59.769390 systemd[1]: Created slice system-systemd\x2dfsck.slice. Nov 1 00:44:59.769400 systemd[1]: Created slice user.slice. Nov 1 00:44:59.769410 systemd[1]: Started systemd-ask-password-console.path. Nov 1 00:44:59.769421 systemd[1]: Started systemd-ask-password-wall.path. Nov 1 00:44:59.769432 systemd[1]: Set up automount boot.automount. Nov 1 00:44:59.769443 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Nov 1 00:44:59.769454 systemd[1]: Reached target integritysetup.target. Nov 1 00:44:59.769464 systemd[1]: Reached target remote-cryptsetup.target. Nov 1 00:44:59.769474 systemd[1]: Reached target remote-fs.target. Nov 1 00:44:59.769485 systemd[1]: Reached target slices.target. Nov 1 00:44:59.769495 systemd[1]: Reached target swap.target. Nov 1 00:44:59.769511 systemd[1]: Reached target torcx.target. Nov 1 00:44:59.769521 systemd[1]: Reached target veritysetup.target. 
Nov 1 00:44:59.769532 systemd[1]: Listening on systemd-coredump.socket. Nov 1 00:44:59.769542 systemd[1]: Listening on systemd-initctl.socket. Nov 1 00:44:59.769552 systemd[1]: Listening on systemd-journald-audit.socket. Nov 1 00:44:59.769570 systemd[1]: Listening on systemd-journald-dev-log.socket. Nov 1 00:44:59.769580 systemd[1]: Listening on systemd-journald.socket. Nov 1 00:44:59.769595 systemd[1]: Listening on systemd-networkd.socket. Nov 1 00:44:59.769608 systemd[1]: Listening on systemd-udevd-control.socket. Nov 1 00:44:59.769618 systemd[1]: Listening on systemd-udevd-kernel.socket. Nov 1 00:44:59.769630 systemd[1]: Listening on systemd-userdbd.socket. Nov 1 00:44:59.769640 systemd[1]: Mounting dev-hugepages.mount... Nov 1 00:44:59.769650 systemd[1]: Mounting dev-mqueue.mount... Nov 1 00:44:59.769662 systemd[1]: Mounting media.mount... Nov 1 00:44:59.769676 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:44:59.769698 systemd[1]: Mounting sys-kernel-debug.mount... Nov 1 00:44:59.769712 systemd[1]: Mounting sys-kernel-tracing.mount... Nov 1 00:44:59.769725 systemd[1]: Mounting tmp.mount... Nov 1 00:44:59.769737 systemd[1]: Starting flatcar-tmpfiles.service... Nov 1 00:44:59.769759 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:44:59.769773 systemd[1]: Starting kmod-static-nodes.service... Nov 1 00:44:59.769786 systemd[1]: Starting modprobe@configfs.service... Nov 1 00:44:59.769798 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:44:59.769811 systemd[1]: Starting modprobe@drm.service... Nov 1 00:44:59.769839 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:44:59.769853 systemd[1]: Starting modprobe@fuse.service... Nov 1 00:44:59.769867 systemd[1]: Starting modprobe@loop.service... 
Nov 1 00:44:59.769881 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 1 00:44:59.769898 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Nov 1 00:44:59.769908 kernel: loop: module loaded Nov 1 00:44:59.769917 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Nov 1 00:44:59.769927 kernel: fuse: init (API version 7.34) Nov 1 00:44:59.769940 systemd[1]: Starting systemd-journald.service... Nov 1 00:44:59.769950 systemd[1]: Starting systemd-modules-load.service... Nov 1 00:44:59.769961 systemd[1]: Starting systemd-network-generator.service... Nov 1 00:44:59.769971 systemd[1]: Starting systemd-remount-fs.service... Nov 1 00:44:59.769981 systemd[1]: Starting systemd-udev-trigger.service... Nov 1 00:44:59.769997 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:44:59.770008 systemd[1]: Mounted dev-hugepages.mount. Nov 1 00:44:59.770018 systemd[1]: Mounted dev-mqueue.mount. Nov 1 00:44:59.770028 systemd[1]: Mounted media.mount. Nov 1 00:44:59.770038 systemd[1]: Mounted sys-kernel-debug.mount. Nov 1 00:44:59.770047 systemd[1]: Mounted sys-kernel-tracing.mount. Nov 1 00:44:59.770057 systemd[1]: Mounted tmp.mount. Nov 1 00:44:59.770067 systemd[1]: Finished flatcar-tmpfiles.service. Nov 1 00:44:59.770077 systemd[1]: Finished kmod-static-nodes.service. Nov 1 00:44:59.770089 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 1 00:44:59.770101 systemd-journald[1028]: Journal started Nov 1 00:44:59.770141 systemd-journald[1028]: Runtime Journal (/run/log/journal/da85435e52a64d87bb68f9262d2e4aed) is 6.0M, max 48.5M, 42.5M free. 
Nov 1 00:44:59.567000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Nov 1 00:44:59.567000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Nov 1 00:44:59.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:59.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:59.767000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Nov 1 00:44:59.767000 audit[1028]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffc06379fa0 a2=4000 a3=7ffc0637a03c items=0 ppid=1 pid=1028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:59.767000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Nov 1 00:44:59.773670 systemd[1]: Finished modprobe@configfs.service. Nov 1 00:44:59.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:44:59.773000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:59.775837 systemd[1]: Started systemd-journald.service. Nov 1 00:44:59.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:59.778879 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:44:59.779174 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:44:59.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:59.780000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:59.781406 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 00:44:59.781672 systemd[1]: Finished modprobe@drm.service. Nov 1 00:44:59.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:59.782000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:59.784004 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:44:59.784264 systemd[1]: Finished modprobe@efi_pstore.service. 
Nov 1 00:44:59.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:59.785000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:59.786706 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 1 00:44:59.787044 systemd[1]: Finished modprobe@fuse.service. Nov 1 00:44:59.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:59.788000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:59.789344 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:44:59.789673 systemd[1]: Finished modprobe@loop.service. Nov 1 00:44:59.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:59.790000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:59.792177 systemd[1]: Finished systemd-modules-load.service. 
Nov 1 00:44:59.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:59.794981 systemd[1]: Finished systemd-network-generator.service. Nov 1 00:44:59.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:59.797812 systemd[1]: Finished systemd-remount-fs.service. Nov 1 00:44:59.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:59.800305 systemd[1]: Reached target network-pre.target. Nov 1 00:44:59.803920 systemd[1]: Mounting sys-fs-fuse-connections.mount... Nov 1 00:44:59.807426 systemd[1]: Mounting sys-kernel-config.mount... Nov 1 00:44:59.809414 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 1 00:44:59.811711 systemd[1]: Starting systemd-hwdb-update.service... Nov 1 00:44:59.815025 systemd[1]: Starting systemd-journal-flush.service... Nov 1 00:44:59.817316 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:44:59.818863 systemd[1]: Starting systemd-random-seed.service... Nov 1 00:44:59.821933 systemd-journald[1028]: Time spent on flushing to /var/log/journal/da85435e52a64d87bb68f9262d2e4aed is 46.985ms for 1035 entries. Nov 1 00:44:59.821933 systemd-journald[1028]: System Journal (/var/log/journal/da85435e52a64d87bb68f9262d2e4aed) is 8.0M, max 195.6M, 187.6M free. Nov 1 00:44:59.886592 systemd-journald[1028]: Received client request to flush runtime journal. 
Nov 1 00:44:59.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:59.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:59.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:59.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:59.821354 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 00:44:59.823129 systemd[1]: Starting systemd-sysctl.service... Nov 1 00:44:59.831668 systemd[1]: Starting systemd-sysusers.service... Nov 1 00:44:59.841079 systemd[1]: Finished systemd-udev-trigger.service. Nov 1 00:44:59.887461 udevadm[1058]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Nov 1 00:44:59.844077 systemd[1]: Mounted sys-fs-fuse-connections.mount. Nov 1 00:44:59.847138 systemd[1]: Mounted sys-kernel-config.mount. Nov 1 00:44:59.849260 systemd[1]: Finished systemd-random-seed.service. Nov 1 00:44:59.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:59.851394 systemd[1]: Reached target first-boot-complete.target. 
Nov 1 00:44:59.855869 systemd[1]: Starting systemd-udev-settle.service... Nov 1 00:44:59.859708 systemd[1]: Finished systemd-sysctl.service. Nov 1 00:44:59.868602 systemd[1]: Finished systemd-sysusers.service. Nov 1 00:44:59.872129 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Nov 1 00:44:59.887873 systemd[1]: Finished systemd-journal-flush.service. Nov 1 00:44:59.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:59.896775 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Nov 1 00:45:00.562843 systemd[1]: Finished systemd-hwdb-update.service. Nov 1 00:45:00.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:45:00.566073 systemd[1]: Starting systemd-udevd.service... Nov 1 00:45:00.587001 systemd-udevd[1068]: Using default interface naming scheme 'v252'. Nov 1 00:45:00.600572 systemd[1]: Started systemd-udevd.service. Nov 1 00:45:00.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:45:00.604961 systemd[1]: Starting systemd-networkd.service... Nov 1 00:45:00.609657 systemd[1]: Starting systemd-userdbd.service... Nov 1 00:45:00.650281 systemd[1]: Started systemd-userdbd.service. Nov 1 00:45:00.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:45:00.659640 systemd[1]: Found device dev-ttyS0.device. 
Nov 1 00:45:00.663840 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Nov 1 00:45:00.677840 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Nov 1 00:45:00.685843 kernel: ACPI: button: Power Button [PWRF] Nov 1 00:45:00.707000 audit[1071]: AVC avc: denied { confidentiality } for pid=1071 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Nov 1 00:45:00.713191 systemd-networkd[1077]: lo: Link UP Nov 1 00:45:00.713213 systemd-networkd[1077]: lo: Gained carrier Nov 1 00:45:00.714345 systemd-networkd[1077]: Enumeration completed Nov 1 00:45:00.714497 systemd-networkd[1077]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 00:45:00.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:45:00.714520 systemd[1]: Started systemd-networkd.service. 
Nov 1 00:45:00.717233 systemd-networkd[1077]: eth0: Link UP Nov 1 00:45:00.717239 systemd-networkd[1077]: eth0: Gained carrier Nov 1 00:45:00.707000 audit[1071]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=559415d53020 a1=338ec a2=7ff7c327ebc5 a3=5 items=110 ppid=1068 pid=1071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:00.707000 audit: CWD cwd="/" Nov 1 00:45:00.707000 audit: PATH item=0 name=(null) inode=44 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=1 name=(null) inode=14488 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=2 name=(null) inode=14488 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=3 name=(null) inode=14489 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=4 name=(null) inode=14488 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=5 name=(null) inode=14490 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=6 name=(null) inode=14488 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=7 name=(null) inode=14491 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=8 name=(null) inode=14491 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=9 name=(null) inode=14492 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=10 name=(null) inode=14491 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=11 name=(null) inode=14493 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=12 name=(null) inode=14491 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=13 name=(null) inode=14494 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=14 name=(null) inode=14491 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=15 name=(null) inode=14495 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: 
PATH item=16 name=(null) inode=14491 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=17 name=(null) inode=14496 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=18 name=(null) inode=14488 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=19 name=(null) inode=14497 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=20 name=(null) inode=14497 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=21 name=(null) inode=14498 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=22 name=(null) inode=14497 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=23 name=(null) inode=14499 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=24 name=(null) inode=14497 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=25 name=(null) inode=14500 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=26 name=(null) inode=14497 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=27 name=(null) inode=14501 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=28 name=(null) inode=14497 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=29 name=(null) inode=14502 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=30 name=(null) inode=14488 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=31 name=(null) inode=14503 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=32 name=(null) inode=14503 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=33 name=(null) inode=14504 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=34 name=(null) inode=14503 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=35 name=(null) inode=14505 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=36 name=(null) inode=14503 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=37 name=(null) inode=14506 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=38 name=(null) inode=14503 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=39 name=(null) inode=14507 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=40 name=(null) inode=14503 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=41 name=(null) inode=14508 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=42 name=(null) inode=14488 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=43 name=(null) inode=14509 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=44 name=(null) inode=14509 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=45 name=(null) inode=14510 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=46 name=(null) inode=14509 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=47 name=(null) inode=14511 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=48 name=(null) inode=14509 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=49 name=(null) inode=14512 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=50 name=(null) inode=14509 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=51 name=(null) inode=14513 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=52 name=(null) inode=14509 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 
Nov 1 00:45:00.707000 audit: PATH item=53 name=(null) inode=14514 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=54 name=(null) inode=44 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=55 name=(null) inode=14515 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=56 name=(null) inode=14515 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=57 name=(null) inode=14516 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=58 name=(null) inode=14515 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=59 name=(null) inode=14517 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=60 name=(null) inode=14515 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=61 name=(null) inode=14518 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=62 name=(null) 
inode=14518 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=63 name=(null) inode=14519 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=64 name=(null) inode=14518 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=65 name=(null) inode=14520 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=66 name=(null) inode=14518 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=67 name=(null) inode=14521 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=68 name=(null) inode=14518 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=69 name=(null) inode=14522 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=70 name=(null) inode=14518 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=71 name=(null) inode=14523 dev=00:0b mode=0100440 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=72 name=(null) inode=14515 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=73 name=(null) inode=14524 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=74 name=(null) inode=14524 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=75 name=(null) inode=14525 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=76 name=(null) inode=14524 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=77 name=(null) inode=14526 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=78 name=(null) inode=14524 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=79 name=(null) inode=14527 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=80 name=(null) inode=14524 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=81 name=(null) inode=14528 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=82 name=(null) inode=14524 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=83 name=(null) inode=14529 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=84 name=(null) inode=14515 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=85 name=(null) inode=14530 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=86 name=(null) inode=14530 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=87 name=(null) inode=14531 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=88 name=(null) inode=14530 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=89 name=(null) inode=14532 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=90 name=(null) inode=14530 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=91 name=(null) inode=14533 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=92 name=(null) inode=14530 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=93 name=(null) inode=14534 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=94 name=(null) inode=14530 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=95 name=(null) inode=14535 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=96 name=(null) inode=14515 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=97 name=(null) inode=14536 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=98 name=(null) inode=14536 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 
audit: PATH item=99 name=(null) inode=14537 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=100 name=(null) inode=14536 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=101 name=(null) inode=14538 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=102 name=(null) inode=14536 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=103 name=(null) inode=14539 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=104 name=(null) inode=14536 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=105 name=(null) inode=14540 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=106 name=(null) inode=14536 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=107 name=(null) inode=14541 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=108 name=(null) inode=1 
dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PATH item=109 name=(null) inode=14569 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:45:00.707000 audit: PROCTITLE proctitle="(udev-worker)" Nov 1 00:45:00.729013 systemd-networkd[1077]: eth0: DHCPv4 address 10.0.0.128/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 1 00:45:00.736847 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Nov 1 00:45:00.740837 kernel: mousedev: PS/2 mouse device common for all mice Nov 1 00:45:00.750841 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Nov 1 00:45:00.755872 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Nov 1 00:45:00.756010 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 1 00:45:00.822413 kernel: kvm: Nested Virtualization enabled Nov 1 00:45:00.822537 kernel: SVM: kvm: Nested Paging enabled Nov 1 00:45:00.822554 kernel: SVM: Virtual VMLOAD VMSAVE supported Nov 1 00:45:00.823630 kernel: SVM: Virtual GIF supported Nov 1 00:45:00.842863 kernel: EDAC MC: Ver: 3.0.0 Nov 1 00:45:00.874437 systemd[1]: Finished systemd-udev-settle.service. Nov 1 00:45:00.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:45:00.877749 systemd[1]: Starting lvm2-activation-early.service... Nov 1 00:45:00.887166 lvm[1104]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 00:45:00.918210 systemd[1]: Finished lvm2-activation-early.service. 
Nov 1 00:45:00.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:45:00.919966 systemd[1]: Reached target cryptsetup.target. Nov 1 00:45:00.923095 systemd[1]: Starting lvm2-activation.service... Nov 1 00:45:00.927477 lvm[1106]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 00:45:00.956299 systemd[1]: Finished lvm2-activation.service. Nov 1 00:45:00.957000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:45:00.957883 systemd[1]: Reached target local-fs-pre.target. Nov 1 00:45:00.959282 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 1 00:45:00.959311 systemd[1]: Reached target local-fs.target. Nov 1 00:45:00.960743 systemd[1]: Reached target machines.target. Nov 1 00:45:00.963619 systemd[1]: Starting ldconfig.service... Nov 1 00:45:00.965153 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:45:00.965207 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:45:00.966386 systemd[1]: Starting systemd-boot-update.service... Nov 1 00:45:00.969135 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Nov 1 00:45:00.972356 systemd[1]: Starting systemd-machine-id-commit.service... Nov 1 00:45:00.975607 systemd[1]: Starting systemd-sysext.service... 
Nov 1 00:45:00.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:45:00.977734 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Nov 1 00:45:00.979942 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1109 (bootctl) Nov 1 00:45:00.981114 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Nov 1 00:45:00.986624 systemd[1]: Unmounting usr-share-oem.mount... Nov 1 00:45:00.989911 systemd[1]: usr-share-oem.mount: Deactivated successfully. Nov 1 00:45:00.990125 systemd[1]: Unmounted usr-share-oem.mount. Nov 1 00:45:01.002844 kernel: loop0: detected capacity change from 0 to 224512 Nov 1 00:45:01.019323 systemd-fsck[1120]: fsck.fat 4.2 (2021-01-31) Nov 1 00:45:01.019323 systemd-fsck[1120]: /dev/vda1: 790 files, 120773/258078 clusters Nov 1 00:45:01.020615 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Nov 1 00:45:01.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:45:01.024441 systemd[1]: Mounting boot.mount... Nov 1 00:45:01.034136 systemd[1]: Mounted boot.mount. Nov 1 00:45:01.260412 systemd[1]: Finished systemd-boot-update.service. Nov 1 00:45:01.261848 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 1 00:45:01.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:45:01.265295 kernel: kauditd_printk_skb: 199 callbacks suppressed Nov 1 00:45:01.265355 kernel: audit: type=1130 audit(1761957901.262:122): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:45:01.301972 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 1 00:45:01.302575 systemd[1]: Finished systemd-machine-id-commit.service. Nov 1 00:45:01.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:45:01.313841 kernel: audit: type=1130 audit(1761957901.304:123): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:45:01.318835 kernel: loop1: detected capacity change from 0 to 224512 Nov 1 00:45:01.323983 (sd-sysext)[1131]: Using extensions 'kubernetes'. Nov 1 00:45:01.324278 (sd-sysext)[1131]: Merged extensions into '/usr'. Nov 1 00:45:01.340214 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:45:01.341583 systemd[1]: Mounting usr-share-oem.mount... Nov 1 00:45:01.343254 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:45:01.344347 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:45:01.346694 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:45:01.349232 systemd[1]: Starting modprobe@loop.service... Nov 1 00:45:01.350606 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Nov 1 00:45:01.350713 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:45:01.350811 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:45:01.351669 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:45:01.351795 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:45:01.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:45:01.352000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:45:01.356276 systemd[1]: Mounted usr-share-oem.mount. Nov 1 00:45:01.360844 kernel: audit: type=1130 audit(1761957901.352:124): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:45:01.360897 kernel: audit: type=1131 audit(1761957901.352:125): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:45:01.368351 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:45:01.368507 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:45:01.369591 ldconfig[1108]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Nov 1 00:45:01.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:45:01.370339 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:45:01.370466 systemd[1]: Finished modprobe@loop.service. Nov 1 00:45:01.369000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:45:01.376851 kernel: audit: type=1130 audit(1761957901.369:126): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:45:01.376897 kernel: audit: type=1131 audit(1761957901.369:127): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:45:01.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:45:01.384315 systemd[1]: Finished ldconfig.service. Nov 1 00:45:01.382000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:45:01.389855 kernel: audit: type=1130 audit(1761957901.382:128): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:45:01.389905 kernel: audit: type=1131 audit(1761957901.382:129): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:45:01.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:45:01.397652 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:45:01.397779 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 00:45:01.399056 systemd[1]: Finished systemd-sysext.service. Nov 1 00:45:01.402862 kernel: audit: type=1130 audit(1761957901.396:130): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:45:01.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:45:01.405735 systemd[1]: Starting ensure-sysext.service... Nov 1 00:45:01.409841 kernel: audit: type=1130 audit(1761957901.403:131): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:45:01.412280 systemd[1]: Starting systemd-tmpfiles-setup.service... Nov 1 00:45:01.416087 systemd[1]: Reloading. Nov 1 00:45:01.430927 systemd-tmpfiles[1146]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. 
Nov 1 00:45:01.431982 systemd-tmpfiles[1146]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 1 00:45:01.433497 systemd-tmpfiles[1146]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 1 00:45:01.474788 /usr/lib/systemd/system-generators/torcx-generator[1166]: time="2025-11-01T00:45:01Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 00:45:01.475152 /usr/lib/systemd/system-generators/torcx-generator[1166]: time="2025-11-01T00:45:01Z" level=info msg="torcx already run" Nov 1 00:45:01.539029 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 00:45:01.539046 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 00:45:01.558808 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:45:01.618486 systemd[1]: Finished systemd-tmpfiles-setup.service. Nov 1 00:45:01.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:45:01.622765 systemd[1]: Starting audit-rules.service... Nov 1 00:45:01.625162 systemd[1]: Starting clean-ca-certificates.service... Nov 1 00:45:01.627760 systemd[1]: Starting systemd-journal-catalog-update.service... Nov 1 00:45:01.630590 systemd[1]: Starting systemd-resolved.service... 
Nov 1 00:45:01.633254 systemd[1]: Starting systemd-timesyncd.service... Nov 1 00:45:01.635611 systemd[1]: Starting systemd-update-utmp.service... Nov 1 00:45:01.637614 systemd[1]: Finished clean-ca-certificates.service. Nov 1 00:45:01.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:45:01.641000 audit[1227]: SYSTEM_BOOT pid=1227 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Nov 1 00:45:01.646896 augenrules[1237]: No rules Nov 1 00:45:01.645000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Nov 1 00:45:01.645000 audit[1237]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe0ca70990 a2=420 a3=0 items=0 ppid=1215 pid=1237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:01.645000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Nov 1 00:45:01.647128 systemd[1]: Finished systemd-journal-catalog-update.service. Nov 1 00:45:01.649801 systemd[1]: Finished audit-rules.service. Nov 1 00:45:01.651873 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:45:01.653366 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:45:01.655684 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:45:01.657937 systemd[1]: Starting modprobe@loop.service... Nov 1 00:45:01.659113 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Nov 1 00:45:01.659215 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:45:01.661017 systemd[1]: Starting systemd-update-done.service... Nov 1 00:45:01.664930 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 00:45:01.666403 systemd[1]: Finished systemd-update-utmp.service. Nov 1 00:45:01.668480 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:45:01.668676 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:45:01.670427 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:45:01.670608 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:45:01.673427 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:45:01.673609 systemd[1]: Finished modprobe@loop.service. Nov 1 00:45:01.675945 systemd[1]: Finished systemd-update-done.service. Nov 1 00:45:01.678561 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:45:01.678661 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 00:45:01.680193 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:45:01.681459 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:45:01.684140 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:45:01.686977 systemd[1]: Starting modprobe@loop.service... Nov 1 00:45:01.689306 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Nov 1 00:45:01.689407 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:45:01.689487 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 00:45:01.690335 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:45:01.690517 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:45:01.692368 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:45:01.692491 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:45:01.694945 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:45:01.695216 systemd[1]: Finished modprobe@loop.service. Nov 1 00:45:01.696996 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:45:01.697073 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 00:45:01.700373 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:45:01.701560 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:45:01.704880 systemd[1]: Starting modprobe@drm.service... Nov 1 00:45:01.708573 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:45:01.711740 systemd[1]: Starting modprobe@loop.service... Nov 1 00:45:01.713331 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:45:01.713526 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:45:01.716191 systemd[1]: Starting systemd-networkd-wait-online.service... 
Nov 1 00:45:01.718248 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 00:45:01.719524 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:45:01.719697 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:45:01.721615 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 00:45:01.721757 systemd[1]: Finished modprobe@drm.service. Nov 1 00:45:01.723751 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:45:01.723888 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:45:01.725911 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:45:01.726084 systemd[1]: Finished modprobe@loop.service. Nov 1 00:45:01.728039 systemd[1]: Started systemd-timesyncd.service. Nov 1 00:45:02.654415 systemd-timesyncd[1226]: Contacted time server 10.0.0.1:123 (10.0.0.1). Nov 1 00:45:02.654478 systemd-timesyncd[1226]: Initial clock synchronization to Sat 2025-11-01 00:45:02.654323 UTC. Nov 1 00:45:02.655298 systemd[1]: Reached target time-set.target. Nov 1 00:45:02.657598 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:45:02.657736 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 00:45:02.659361 systemd[1]: Finished ensure-sysext.service. Nov 1 00:45:02.669151 systemd-resolved[1223]: Positive Trust Anchors: Nov 1 00:45:02.669167 systemd-resolved[1223]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 00:45:02.669202 systemd-resolved[1223]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Nov 1 00:45:02.676790 systemd-resolved[1223]: Defaulting to hostname 'linux'. Nov 1 00:45:02.678284 systemd[1]: Started systemd-resolved.service. Nov 1 00:45:02.679756 systemd[1]: Reached target network.target. Nov 1 00:45:02.681040 systemd[1]: Reached target nss-lookup.target. Nov 1 00:45:02.682401 systemd[1]: Reached target sysinit.target. Nov 1 00:45:02.683766 systemd[1]: Started motdgen.path. Nov 1 00:45:02.684910 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Nov 1 00:45:02.686844 systemd[1]: Started logrotate.timer. Nov 1 00:45:02.688147 systemd[1]: Started mdadm.timer. Nov 1 00:45:02.689282 systemd[1]: Started systemd-tmpfiles-clean.timer. Nov 1 00:45:02.690668 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 1 00:45:02.690760 systemd[1]: Reached target paths.target. Nov 1 00:45:02.691976 systemd[1]: Reached target timers.target. Nov 1 00:45:02.693573 systemd[1]: Listening on dbus.socket. Nov 1 00:45:02.696181 systemd[1]: Starting docker.socket... Nov 1 00:45:02.698594 systemd[1]: Listening on sshd.socket. Nov 1 00:45:02.699893 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Nov 1 00:45:02.700286 systemd[1]: Listening on docker.socket. Nov 1 00:45:02.701540 systemd[1]: Reached target sockets.target. Nov 1 00:45:02.702819 systemd[1]: Reached target basic.target. Nov 1 00:45:02.704432 systemd[1]: System is tainted: cgroupsv1 Nov 1 00:45:02.704482 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Nov 1 00:45:02.704507 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Nov 1 00:45:02.705784 systemd[1]: Starting containerd.service... Nov 1 00:45:02.707967 systemd[1]: Starting dbus.service... Nov 1 00:45:02.710375 systemd[1]: Starting enable-oem-cloudinit.service... Nov 1 00:45:02.712941 systemd[1]: Starting extend-filesystems.service... Nov 1 00:45:02.714464 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Nov 1 00:45:02.715547 jq[1277]: false Nov 1 00:45:02.715802 systemd[1]: Starting motdgen.service... Nov 1 00:45:02.718134 systemd[1]: Starting prepare-helm.service... Nov 1 00:45:02.720935 systemd[1]: Starting ssh-key-proc-cmdline.service... Nov 1 00:45:02.724176 systemd[1]: Starting sshd-keygen.service... Nov 1 00:45:02.728184 systemd[1]: Starting systemd-logind.service... Nov 1 00:45:02.729755 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:45:02.729840 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 1 00:45:02.731324 systemd[1]: Starting update-engine.service... Nov 1 00:45:02.733825 systemd[1]: Starting update-ssh-keys-after-ignition.service... 
Nov 1 00:45:02.738878 dbus-daemon[1276]: [system] SELinux support is enabled Nov 1 00:45:02.742140 extend-filesystems[1278]: Found loop1 Nov 1 00:45:02.742140 extend-filesystems[1278]: Found sr0 Nov 1 00:45:02.742140 extend-filesystems[1278]: Found vda Nov 1 00:45:02.742140 extend-filesystems[1278]: Found vda1 Nov 1 00:45:02.742140 extend-filesystems[1278]: Found vda2 Nov 1 00:45:02.742140 extend-filesystems[1278]: Found vda3 Nov 1 00:45:02.742140 extend-filesystems[1278]: Found usr Nov 1 00:45:02.742140 extend-filesystems[1278]: Found vda4 Nov 1 00:45:02.742140 extend-filesystems[1278]: Found vda6 Nov 1 00:45:02.742140 extend-filesystems[1278]: Found vda7 Nov 1 00:45:02.742140 extend-filesystems[1278]: Found vda9 Nov 1 00:45:02.742140 extend-filesystems[1278]: Checking size of /dev/vda9 Nov 1 00:45:02.811311 extend-filesystems[1278]: Resized partition /dev/vda9 Nov 1 00:45:02.792234 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 1 00:45:02.813478 jq[1296]: true Nov 1 00:45:02.813672 extend-filesystems[1309]: resize2fs 1.46.5 (30-Dec-2021) Nov 1 00:45:02.817680 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Nov 1 00:45:02.792597 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Nov 1 00:45:02.817904 update_engine[1295]: I1101 00:45:02.813885 1295 main.cc:92] Flatcar Update Engine starting Nov 1 00:45:02.793036 systemd[1]: Started dbus.service. Nov 1 00:45:02.819233 systemd[1]: motdgen.service: Deactivated successfully. Nov 1 00:45:02.819520 systemd[1]: Finished motdgen.service. Nov 1 00:45:02.849401 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Nov 1 00:45:02.849696 update_engine[1295]: I1101 00:45:02.824262 1295 update_check_scheduler.cc:74] Next update check in 6m29s Nov 1 00:45:02.822466 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 1 00:45:02.822737 systemd[1]: Finished ssh-key-proc-cmdline.service. 
Nov 1 00:45:02.926041 tar[1310]: linux-amd64/LICENSE Nov 1 00:45:02.936756 jq[1311]: true Nov 1 00:45:02.937261 tar[1310]: linux-amd64/helm Nov 1 00:45:02.937617 systemd[1]: Started update-engine.service. Nov 1 00:45:02.939549 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:45:02.942619 extend-filesystems[1309]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 1 00:45:02.942619 extend-filesystems[1309]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 1 00:45:02.942619 extend-filesystems[1309]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Nov 1 00:45:02.941255 systemd[1]: Started locksmithd.service. Nov 1 00:45:02.955572 extend-filesystems[1278]: Resized filesystem in /dev/vda9 Nov 1 00:45:02.945298 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 1 00:45:02.945422 systemd[1]: Reached target system-config.target. Nov 1 00:45:02.947160 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 1 00:45:02.947192 systemd[1]: Reached target user-config.target. Nov 1 00:45:02.948696 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:45:02.948993 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 1 00:45:02.949210 systemd[1]: Finished extend-filesystems.service. Nov 1 00:45:02.958366 systemd-logind[1290]: Watching system buttons on /dev/input/event1 (Power Button) Nov 1 00:45:02.958728 systemd-logind[1290]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 1 00:45:02.959803 systemd-logind[1290]: New seat seat0. Nov 1 00:45:02.962361 systemd[1]: Started systemd-logind.service. 
Nov 1 00:45:02.979630 env[1313]: time="2025-11-01T00:45:02.979530630Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Nov 1 00:45:03.044548 env[1313]: time="2025-11-01T00:45:03.044492015Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 1 00:45:03.045119 env[1313]: time="2025-11-01T00:45:03.045086761Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:45:03.047570 env[1313]: time="2025-11-01T00:45:03.047494686Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:45:03.047774 env[1313]: time="2025-11-01T00:45:03.047731480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:45:03.048119 env[1313]: time="2025-11-01T00:45:03.048083931Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:45:03.048119 env[1313]: time="2025-11-01T00:45:03.048103508Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 1 00:45:03.048119 env[1313]: time="2025-11-01T00:45:03.048122023Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Nov 1 00:45:03.048239 env[1313]: time="2025-11-01T00:45:03.048131811Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Nov 1 00:45:03.048239 env[1313]: time="2025-11-01T00:45:03.048212763Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:45:03.048507 env[1313]: time="2025-11-01T00:45:03.048483651Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:45:03.048668 env[1313]: time="2025-11-01T00:45:03.048631508Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:45:03.048668 env[1313]: time="2025-11-01T00:45:03.048648831Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 1 00:45:03.048743 env[1313]: time="2025-11-01T00:45:03.048700057Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Nov 1 00:45:03.048743 env[1313]: time="2025-11-01T00:45:03.048711599Z" level=info msg="metadata content store policy set" policy=shared Nov 1 00:45:03.076896 bash[1337]: Updated "/home/core/.ssh/authorized_keys" Nov 1 00:45:03.077829 systemd[1]: Finished update-ssh-keys-after-ignition.service. Nov 1 00:45:03.116845 env[1313]: time="2025-11-01T00:45:03.116788547Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 1 00:45:03.116845 env[1313]: time="2025-11-01T00:45:03.116846025Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 1 00:45:03.116998 env[1313]: time="2025-11-01T00:45:03.116864119Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." 
type=io.containerd.gc.v1 Nov 1 00:45:03.116998 env[1313]: time="2025-11-01T00:45:03.116908853Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 1 00:45:03.116998 env[1313]: time="2025-11-01T00:45:03.116928139Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 1 00:45:03.116998 env[1313]: time="2025-11-01T00:45:03.116943558Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 1 00:45:03.116998 env[1313]: time="2025-11-01T00:45:03.116960199Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 1 00:45:03.116998 env[1313]: time="2025-11-01T00:45:03.116989634Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 1 00:45:03.117146 env[1313]: time="2025-11-01T00:45:03.117025522Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Nov 1 00:45:03.117146 env[1313]: time="2025-11-01T00:45:03.117044748Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 1 00:45:03.117146 env[1313]: time="2025-11-01T00:45:03.117060617Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 1 00:45:03.117146 env[1313]: time="2025-11-01T00:45:03.117076367Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 1 00:45:03.119370 env[1313]: time="2025-11-01T00:45:03.117685429Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 1 00:45:03.119370 env[1313]: time="2025-11-01T00:45:03.117803150Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Nov 1 00:45:03.119370 env[1313]: time="2025-11-01T00:45:03.118170619Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 1 00:45:03.119370 env[1313]: time="2025-11-01T00:45:03.118202238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 1 00:45:03.119370 env[1313]: time="2025-11-01T00:45:03.118214331Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 1 00:45:03.119370 env[1313]: time="2025-11-01T00:45:03.118265848Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 1 00:45:03.119370 env[1313]: time="2025-11-01T00:45:03.118278040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 1 00:45:03.119370 env[1313]: time="2025-11-01T00:45:03.118289873Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 1 00:45:03.119370 env[1313]: time="2025-11-01T00:45:03.118300052Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 1 00:45:03.119370 env[1313]: time="2025-11-01T00:45:03.118311363Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 1 00:45:03.119370 env[1313]: time="2025-11-01T00:45:03.118326231Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 1 00:45:03.119370 env[1313]: time="2025-11-01T00:45:03.118336590Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 1 00:45:03.119370 env[1313]: time="2025-11-01T00:45:03.118370744Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Nov 1 00:45:03.119370 env[1313]: time="2025-11-01T00:45:03.118383468Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 1 00:45:03.119370 env[1313]: time="2025-11-01T00:45:03.118573845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 1 00:45:03.119776 env[1313]: time="2025-11-01T00:45:03.118606156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 1 00:45:03.119776 env[1313]: time="2025-11-01T00:45:03.118617738Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 1 00:45:03.119776 env[1313]: time="2025-11-01T00:45:03.118627927Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 1 00:45:03.119776 env[1313]: time="2025-11-01T00:45:03.118642504Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Nov 1 00:45:03.119776 env[1313]: time="2025-11-01T00:45:03.118652903Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 1 00:45:03.119776 env[1313]: time="2025-11-01T00:45:03.118685875Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Nov 1 00:45:03.119776 env[1313]: time="2025-11-01T00:45:03.118736610Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Nov 1 00:45:03.119905 env[1313]: time="2025-11-01T00:45:03.118979817Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 1 00:45:03.119905 env[1313]: time="2025-11-01T00:45:03.119037364Z" level=info msg="Connect containerd service" Nov 1 00:45:03.119905 env[1313]: time="2025-11-01T00:45:03.119085685Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 1 00:45:03.122099 env[1313]: time="2025-11-01T00:45:03.122059722Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 1 00:45:03.122431 env[1313]: time="2025-11-01T00:45:03.122414378Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 1 00:45:03.122532 env[1313]: time="2025-11-01T00:45:03.122516249Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 1 00:45:03.122733 systemd[1]: Started containerd.service. 
Nov 1 00:45:03.124498 env[1313]: time="2025-11-01T00:45:03.124480723Z" level=info msg="containerd successfully booted in 0.145746s" Nov 1 00:45:03.127431 env[1313]: time="2025-11-01T00:45:03.127395789Z" level=info msg="Start subscribing containerd event" Nov 1 00:45:03.127563 env[1313]: time="2025-11-01T00:45:03.127546212Z" level=info msg="Start recovering state" Nov 1 00:45:03.154584 env[1313]: time="2025-11-01T00:45:03.141647266Z" level=info msg="Start event monitor" Nov 1 00:45:03.154584 env[1313]: time="2025-11-01T00:45:03.141734860Z" level=info msg="Start snapshots syncer" Nov 1 00:45:03.154584 env[1313]: time="2025-11-01T00:45:03.141762462Z" level=info msg="Start cni network conf syncer for default" Nov 1 00:45:03.154584 env[1313]: time="2025-11-01T00:45:03.141854445Z" level=info msg="Start streaming server" Nov 1 00:45:03.165749 locksmithd[1329]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 1 00:45:03.261462 systemd-networkd[1077]: eth0: Gained IPv6LL Nov 1 00:45:03.263854 systemd[1]: Finished systemd-networkd-wait-online.service. Nov 1 00:45:03.266062 systemd[1]: Reached target network-online.target. Nov 1 00:45:03.269746 systemd[1]: Starting kubelet.service... Nov 1 00:45:03.334275 sshd_keygen[1303]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 1 00:45:03.356692 systemd[1]: Finished sshd-keygen.service. Nov 1 00:45:03.360296 systemd[1]: Starting issuegen.service... Nov 1 00:45:03.365449 systemd[1]: issuegen.service: Deactivated successfully. Nov 1 00:45:03.365694 systemd[1]: Finished issuegen.service. Nov 1 00:45:03.417731 systemd[1]: Starting systemd-user-sessions.service... Nov 1 00:45:03.428108 systemd[1]: Finished systemd-user-sessions.service. Nov 1 00:45:03.433443 systemd[1]: Started getty@tty1.service. Nov 1 00:45:03.437213 systemd[1]: Started serial-getty@ttyS0.service. Nov 1 00:45:03.439336 systemd[1]: Reached target getty.target. 
Nov 1 00:45:03.531029 tar[1310]: linux-amd64/README.md Nov 1 00:45:03.536527 systemd[1]: Finished prepare-helm.service. Nov 1 00:45:04.366114 systemd[1]: Started kubelet.service. Nov 1 00:45:04.368243 systemd[1]: Reached target multi-user.target. Nov 1 00:45:04.371421 systemd[1]: Starting systemd-update-utmp-runlevel.service... Nov 1 00:45:04.378727 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Nov 1 00:45:04.378930 systemd[1]: Finished systemd-update-utmp-runlevel.service. Nov 1 00:45:04.382063 systemd[1]: Startup finished in 7.590s (kernel) + 6.909s (userspace) = 14.500s. Nov 1 00:45:04.820985 systemd[1]: Created slice system-sshd.slice. Nov 1 00:45:04.822366 systemd[1]: Started sshd@0-10.0.0.128:22-10.0.0.1:54756.service. Nov 1 00:45:04.862299 sshd[1386]: Accepted publickey for core from 10.0.0.1 port 54756 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:45:04.865052 sshd[1386]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:45:04.874387 systemd[1]: Created slice user-500.slice. Nov 1 00:45:04.875453 systemd[1]: Starting user-runtime-dir@500.service... Nov 1 00:45:04.878201 systemd-logind[1290]: New session 1 of user core. Nov 1 00:45:04.885895 systemd[1]: Finished user-runtime-dir@500.service. Nov 1 00:45:04.887368 systemd[1]: Starting user@500.service... Nov 1 00:45:04.891560 (systemd)[1392]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:45:04.971109 systemd[1392]: Queued start job for default target default.target. Nov 1 00:45:04.971500 systemd[1392]: Reached target paths.target. Nov 1 00:45:04.971741 systemd[1392]: Reached target sockets.target. Nov 1 00:45:04.971764 systemd[1392]: Reached target timers.target. Nov 1 00:45:04.971780 systemd[1392]: Reached target basic.target. Nov 1 00:45:04.971931 systemd[1]: Started user@500.service. Nov 1 00:45:04.972893 systemd[1]: Started session-1.scope. 
Nov 1 00:45:04.973148 systemd[1392]: Reached target default.target. Nov 1 00:45:04.973469 systemd[1392]: Startup finished in 70ms. Nov 1 00:45:05.025144 systemd[1]: Started sshd@1-10.0.0.128:22-10.0.0.1:54768.service. Nov 1 00:45:05.046044 kubelet[1378]: E1101 00:45:05.045979 1378 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:45:05.048044 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:45:05.048244 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:45:05.063203 sshd[1401]: Accepted publickey for core from 10.0.0.1 port 54768 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:45:05.064268 sshd[1401]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:45:05.068038 systemd-logind[1290]: New session 2 of user core. Nov 1 00:45:05.068426 systemd[1]: Started session-2.scope. Nov 1 00:45:05.120232 sshd[1401]: pam_unix(sshd:session): session closed for user core Nov 1 00:45:05.122558 systemd[1]: Started sshd@2-10.0.0.128:22-10.0.0.1:54782.service. Nov 1 00:45:05.122937 systemd[1]: sshd@1-10.0.0.128:22-10.0.0.1:54768.service: Deactivated successfully. Nov 1 00:45:05.123748 systemd[1]: session-2.scope: Deactivated successfully. Nov 1 00:45:05.124231 systemd-logind[1290]: Session 2 logged out. Waiting for processes to exit. Nov 1 00:45:05.125147 systemd-logind[1290]: Removed session 2. 
Nov 1 00:45:05.152570 sshd[1408]: Accepted publickey for core from 10.0.0.1 port 54782 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:45:05.153549 sshd[1408]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:45:05.156652 systemd-logind[1290]: New session 3 of user core. Nov 1 00:45:05.157424 systemd[1]: Started session-3.scope. Nov 1 00:45:05.207775 sshd[1408]: pam_unix(sshd:session): session closed for user core Nov 1 00:45:05.210785 systemd[1]: Started sshd@3-10.0.0.128:22-10.0.0.1:54788.service. Nov 1 00:45:05.211704 systemd[1]: sshd@2-10.0.0.128:22-10.0.0.1:54782.service: Deactivated successfully. Nov 1 00:45:05.212557 systemd[1]: session-3.scope: Deactivated successfully. Nov 1 00:45:05.213234 systemd-logind[1290]: Session 3 logged out. Waiting for processes to exit. Nov 1 00:45:05.214391 systemd-logind[1290]: Removed session 3. Nov 1 00:45:05.240845 sshd[1414]: Accepted publickey for core from 10.0.0.1 port 54788 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:45:05.241817 sshd[1414]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:45:05.245012 systemd-logind[1290]: New session 4 of user core. Nov 1 00:45:05.245669 systemd[1]: Started session-4.scope. Nov 1 00:45:05.303585 sshd[1414]: pam_unix(sshd:session): session closed for user core Nov 1 00:45:05.306542 systemd[1]: sshd@3-10.0.0.128:22-10.0.0.1:54788.service: Deactivated successfully. Nov 1 00:45:05.307947 systemd-logind[1290]: Session 4 logged out. Waiting for processes to exit. Nov 1 00:45:05.310049 systemd[1]: Started sshd@4-10.0.0.128:22-10.0.0.1:54802.service. Nov 1 00:45:05.311583 systemd[1]: session-4.scope: Deactivated successfully. Nov 1 00:45:05.312561 systemd-logind[1290]: Removed session 4. 
Nov 1 00:45:05.342119 sshd[1423]: Accepted publickey for core from 10.0.0.1 port 54802 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:45:05.343243 sshd[1423]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:45:05.348014 systemd-logind[1290]: New session 5 of user core. Nov 1 00:45:05.348875 systemd[1]: Started session-5.scope. Nov 1 00:45:05.408379 sudo[1427]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 1 00:45:05.408576 sudo[1427]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Nov 1 00:45:05.419607 dbus-daemon[1276]: Ѝ\xef\xb5 V: received setenforce notice (enforcing=38010416) Nov 1 00:45:05.422148 sudo[1427]: pam_unix(sudo:session): session closed for user root Nov 1 00:45:05.424112 sshd[1423]: pam_unix(sshd:session): session closed for user core Nov 1 00:45:05.427059 systemd[1]: Started sshd@5-10.0.0.128:22-10.0.0.1:54814.service. Nov 1 00:45:05.428590 systemd[1]: sshd@4-10.0.0.128:22-10.0.0.1:54802.service: Deactivated successfully. Nov 1 00:45:05.429547 systemd-logind[1290]: Session 5 logged out. Waiting for processes to exit. Nov 1 00:45:05.429556 systemd[1]: session-5.scope: Deactivated successfully. Nov 1 00:45:05.430807 systemd-logind[1290]: Removed session 5. Nov 1 00:45:05.458807 sshd[1429]: Accepted publickey for core from 10.0.0.1 port 54814 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:45:05.460417 sshd[1429]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:45:05.465264 systemd-logind[1290]: New session 6 of user core. Nov 1 00:45:05.466167 systemd[1]: Started session-6.scope. 
Nov 1 00:45:05.527471 sudo[1436]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 1 00:45:05.527897 sudo[1436]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Nov 1 00:45:05.530934 sudo[1436]: pam_unix(sudo:session): session closed for user root Nov 1 00:45:05.535735 sudo[1435]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 1 00:45:05.535914 sudo[1435]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Nov 1 00:45:05.547710 systemd[1]: Stopping audit-rules.service... Nov 1 00:45:05.548000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Nov 1 00:45:05.548000 audit[1439]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd1901dae0 a2=420 a3=0 items=0 ppid=1 pid=1439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:05.548000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Nov 1 00:45:05.549264 auditctl[1439]: No rules Nov 1 00:45:05.549845 systemd[1]: audit-rules.service: Deactivated successfully. Nov 1 00:45:05.550386 systemd[1]: Stopped audit-rules.service. Nov 1 00:45:05.550000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:45:05.553730 systemd[1]: Starting audit-rules.service... Nov 1 00:45:05.570097 augenrules[1457]: No rules Nov 1 00:45:05.571018 systemd[1]: Finished audit-rules.service. 
Nov 1 00:45:05.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:45:05.572071 sudo[1435]: pam_unix(sudo:session): session closed for user root Nov 1 00:45:05.571000 audit[1435]: USER_END pid=1435 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 00:45:05.571000 audit[1435]: CRED_DISP pid=1435 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 00:45:05.573419 sshd[1429]: pam_unix(sshd:session): session closed for user core Nov 1 00:45:05.573000 audit[1429]: USER_END pid=1429 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:45:05.574000 audit[1429]: CRED_DISP pid=1429 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:45:05.576628 systemd[1]: sshd@5-10.0.0.128:22-10.0.0.1:54814.service: Deactivated successfully. Nov 1 00:45:05.576000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.128:22-10.0.0.1:54814 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:45:05.577838 systemd-logind[1290]: Session 6 logged out. Waiting for processes to exit. 
Nov 1 00:45:05.579901 systemd[1]: Started sshd@6-10.0.0.128:22-10.0.0.1:54826.service. Nov 1 00:45:05.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.128:22-10.0.0.1:54826 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:45:05.580623 systemd[1]: session-6.scope: Deactivated successfully. Nov 1 00:45:05.581625 systemd-logind[1290]: Removed session 6. Nov 1 00:45:05.609000 audit[1464]: USER_ACCT pid=1464 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:45:05.610563 sshd[1464]: Accepted publickey for core from 10.0.0.1 port 54826 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:45:05.610000 audit[1464]: CRED_ACQ pid=1464 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:45:05.610000 audit[1464]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcb1a4d6d0 a2=3 a3=0 items=0 ppid=1 pid=1464 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:05.610000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:45:05.611393 sshd[1464]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:45:05.615079 systemd-logind[1290]: New session 7 of user core. Nov 1 00:45:05.615830 systemd[1]: Started session-7.scope. 
Nov 1 00:45:05.619000 audit[1464]: USER_START pid=1464 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:45:05.620000 audit[1467]: CRED_ACQ pid=1467 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:45:05.668000 audit[1468]: USER_ACCT pid=1468 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 00:45:05.668000 audit[1468]: CRED_REFR pid=1468 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 00:45:05.668761 sudo[1468]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 1 00:45:05.668964 sudo[1468]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Nov 1 00:45:05.670000 audit[1468]: USER_START pid=1468 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 00:45:05.701130 systemd[1]: Starting docker.service... 
Nov 1 00:45:05.826513 env[1480]: time="2025-11-01T00:45:05.826442065Z" level=info msg="Starting up" Nov 1 00:45:05.828027 env[1480]: time="2025-11-01T00:45:05.827998313Z" level=info msg="parsed scheme: \"unix\"" module=grpc Nov 1 00:45:05.828027 env[1480]: time="2025-11-01T00:45:05.828015806Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Nov 1 00:45:05.828092 env[1480]: time="2025-11-01T00:45:05.828038789Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Nov 1 00:45:05.828092 env[1480]: time="2025-11-01T00:45:05.828048918Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Nov 1 00:45:05.830233 env[1480]: time="2025-11-01T00:45:05.830201335Z" level=info msg="parsed scheme: \"unix\"" module=grpc Nov 1 00:45:05.830233 env[1480]: time="2025-11-01T00:45:05.830216513Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Nov 1 00:45:05.830233 env[1480]: time="2025-11-01T00:45:05.830226502Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Nov 1 00:45:05.830233 env[1480]: time="2025-11-01T00:45:05.830233154Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Nov 1 00:45:05.837498 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport289610108-merged.mount: Deactivated successfully. Nov 1 00:45:06.625990 env[1480]: time="2025-11-01T00:45:06.625898100Z" level=warning msg="Your kernel does not support cgroup blkio weight" Nov 1 00:45:06.625990 env[1480]: time="2025-11-01T00:45:06.625959826Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Nov 1 00:45:06.626289 env[1480]: time="2025-11-01T00:45:06.626242456Z" level=info msg="Loading containers: start." 
Nov 1 00:45:06.685000 audit[1514]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1514 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:45:06.685000 audit[1514]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffef825c660 a2=0 a3=7ffef825c64c items=0 ppid=1480 pid=1514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:06.685000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Nov 1 00:45:06.687000 audit[1516]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1516 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:45:06.687000 audit[1516]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffc866f3920 a2=0 a3=7ffc866f390c items=0 ppid=1480 pid=1516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:06.687000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Nov 1 00:45:06.689000 audit[1518]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1518 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:45:06.689000 audit[1518]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc4efe8f90 a2=0 a3=7ffc4efe8f7c items=0 ppid=1480 pid=1518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:06.689000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 
Nov 1 00:45:06.691000 audit[1520]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1520 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 00:45:06.691000 audit[1520]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc5fe1f590 a2=0 a3=7ffc5fe1f57c items=0 ppid=1480 pid=1520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:45:06.691000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32
Nov 1 00:45:06.693000 audit[1522]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1522 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 00:45:06.693000 audit[1522]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff3f57f060 a2=0 a3=7fff3f57f04c items=0 ppid=1480 pid=1522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:45:06.693000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E
Nov 1 00:45:06.720000 audit[1527]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1527 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 00:45:06.720000 audit[1527]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff533043a0 a2=0 a3=7fff5330438c items=0 ppid=1480 pid=1527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:45:06.720000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E
Nov 1 00:45:06.938000 audit[1529]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1529 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 00:45:06.938000 audit[1529]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdad0227c0 a2=0 a3=7ffdad0227ac items=0 ppid=1480 pid=1529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:45:06.938000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552
Nov 1 00:45:06.939000 audit[1531]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1531 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 00:45:06.939000 audit[1531]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fff8f7e02f0 a2=0 a3=7fff8f7e02dc items=0 ppid=1480 pid=1531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:45:06.939000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E
Nov 1 00:45:06.941000 audit[1533]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1533 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 00:45:06.941000 audit[1533]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffcfd1bc890 a2=0 a3=7ffcfd1bc87c items=0 ppid=1480 pid=1533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:45:06.941000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Nov 1 00:45:06.949000 audit[1537]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1537 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 00:45:06.949000 audit[1537]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7fffd69ae4f0 a2=0 a3=7fffd69ae4dc items=0 ppid=1480 pid=1537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:45:06.949000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552
Nov 1 00:45:06.958000 audit[1538]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1538 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 00:45:06.958000 audit[1538]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fff0259aaa0 a2=0 a3=7fff0259aa8c items=0 ppid=1480 pid=1538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:45:06.958000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Nov 1 00:45:06.968369 kernel: Initializing XFRM netlink socket
Nov 1 00:45:06.997108 env[1480]: time="2025-11-01T00:45:06.997057897Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Nov 1 00:45:07.015000 audit[1546]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1546 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 00:45:07.015000 audit[1546]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7fff8b9285f0 a2=0 a3=7fff8b9285dc items=0 ppid=1480 pid=1546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:45:07.015000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445
Nov 1 00:45:07.030000 audit[1549]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1549 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 00:45:07.030000 audit[1549]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffc1c789110 a2=0 a3=7ffc1c7890fc items=0 ppid=1480 pid=1549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:45:07.030000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E
Nov 1 00:45:07.033000 audit[1552]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1552 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 00:45:07.033000 audit[1552]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffc653e28a0 a2=0 a3=7ffc653e288c items=0 ppid=1480 pid=1552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:45:07.033000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054
Nov 1 00:45:07.034000 audit[1554]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1554 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 00:45:07.034000 audit[1554]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fff157dd5c0 a2=0 a3=7fff157dd5ac items=0 ppid=1480 pid=1554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:45:07.034000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054
Nov 1 00:45:07.036000 audit[1556]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1556 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 00:45:07.036000 audit[1556]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffced99cba0 a2=0 a3=7ffced99cb8c items=0 ppid=1480 pid=1556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:45:07.036000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552
Nov 1 00:45:07.038000 audit[1558]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1558 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 00:45:07.038000 audit[1558]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffef5d927c0 a2=0 a3=7ffef5d927ac items=0 ppid=1480 pid=1558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:45:07.038000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38
Nov 1 00:45:07.039000 audit[1560]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1560 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 00:45:07.039000 audit[1560]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffc1035c6a0 a2=0 a3=7ffc1035c68c items=0 ppid=1480 pid=1560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:45:07.039000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552
Nov 1 00:45:07.046000 audit[1563]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1563 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 00:45:07.046000 audit[1563]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffe8e4ab8f0 a2=0 a3=7ffe8e4ab8dc items=0 ppid=1480 pid=1563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:45:07.046000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054
Nov 1 00:45:07.048000 audit[1565]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1565 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 00:45:07.048000 audit[1565]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7fffbc9ddaa0 a2=0 a3=7fffbc9dda8c items=0 ppid=1480 pid=1565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:45:07.048000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31
Nov 1 00:45:07.049000 audit[1567]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1567 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 00:45:07.049000 audit[1567]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffce72431b0 a2=0 a3=7ffce724319c items=0 ppid=1480 pid=1567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:45:07.049000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32
Nov 1 00:45:07.051000 audit[1569]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1569 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 00:45:07.051000 audit[1569]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc02b31270 a2=0 a3=7ffc02b3125c items=0 ppid=1480 pid=1569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:45:07.051000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50
Nov 1 00:45:07.052329 systemd-networkd[1077]: docker0: Link UP
Nov 1 00:45:07.060000 audit[1573]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1573 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 00:45:07.060000 audit[1573]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffdda4ad5f0 a2=0 a3=7ffdda4ad5dc items=0 ppid=1480 pid=1573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:45:07.060000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552
Nov 1 00:45:07.066000 audit[1574]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1574 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 00:45:07.066000 audit[1574]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffde4bcd2b0 a2=0 a3=7ffde4bcd29c items=0 ppid=1480 pid=1574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:45:07.066000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Nov 1 00:45:07.067675 env[1480]: time="2025-11-01T00:45:07.067630436Z" level=info msg="Loading containers: done."
Nov 1 00:45:07.109068 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3536502437-merged.mount: Deactivated successfully.
Nov 1 00:45:07.113064 env[1480]: time="2025-11-01T00:45:07.113015469Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Nov 1 00:45:07.113215 env[1480]: time="2025-11-01T00:45:07.113190317Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Nov 1 00:45:07.113303 env[1480]: time="2025-11-01T00:45:07.113278332Z" level=info msg="Daemon has completed initialization"
Nov 1 00:45:07.129686 systemd[1]: Started docker.service.
Nov 1 00:45:07.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:45:07.137621 env[1480]: time="2025-11-01T00:45:07.137571400Z" level=info msg="API listen on /run/docker.sock"
Nov 1 00:45:08.073591 env[1313]: time="2025-11-01T00:45:08.073509067Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\""
Nov 1 00:45:08.774161 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3151584126.mount: Deactivated successfully.
Nov 1 00:45:10.820509 env[1313]: time="2025-11-01T00:45:10.820446656Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:45:10.822428 env[1313]: time="2025-11-01T00:45:10.822385552Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:45:10.824262 env[1313]: time="2025-11-01T00:45:10.824217658Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:45:10.826113 env[1313]: time="2025-11-01T00:45:10.826074540Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:45:10.827623 env[1313]: time="2025-11-01T00:45:10.827578390Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\""
Nov 1 00:45:10.828295 env[1313]: time="2025-11-01T00:45:10.828261822Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\""
Nov 1 00:45:12.707597 env[1313]: time="2025-11-01T00:45:12.707532275Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:45:12.710860 env[1313]: time="2025-11-01T00:45:12.710794964Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:45:12.714073 env[1313]: time="2025-11-01T00:45:12.714023198Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:45:12.716918 env[1313]: time="2025-11-01T00:45:12.716876569Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:45:12.717830 env[1313]: time="2025-11-01T00:45:12.717795122Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\""
Nov 1 00:45:12.719504 env[1313]: time="2025-11-01T00:45:12.719467508Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\""
Nov 1 00:45:15.060617 env[1313]: time="2025-11-01T00:45:15.060536197Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:45:15.063779 env[1313]: time="2025-11-01T00:45:15.063737331Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:45:15.066414 env[1313]: time="2025-11-01T00:45:15.066391638Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:45:15.068403 env[1313]: time="2025-11-01T00:45:15.068381881Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:45:15.069031 env[1313]: time="2025-11-01T00:45:15.068999299Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\""
Nov 1 00:45:15.069594 env[1313]: time="2025-11-01T00:45:15.069573846Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\""
Nov 1 00:45:15.155912 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Nov 1 00:45:15.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:45:15.156126 systemd[1]: Stopped kubelet.service.
Nov 1 00:45:15.157776 kernel: kauditd_printk_skb: 100 callbacks suppressed
Nov 1 00:45:15.157837 kernel: audit: type=1130 audit(1761957915.155:178): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:45:15.158158 systemd[1]: Starting kubelet.service...
Nov 1 00:45:15.155000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:45:15.168890 kernel: audit: type=1131 audit(1761957915.155:179): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:45:15.263686 systemd[1]: Started kubelet.service.
Nov 1 00:45:15.274936 kernel: audit: type=1130 audit(1761957915.263:180): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:45:15.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:45:15.636711 kubelet[1619]: E1101 00:45:15.636639 1619 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 1 00:45:15.639501 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 1 00:45:15.639639 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 1 00:45:15.639000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Nov 1 00:45:15.646365 kernel: audit: type=1131 audit(1761957915.639:181): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Nov 1 00:45:16.985256 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2953336425.mount: Deactivated successfully.
Nov 1 00:45:18.516815 env[1313]: time="2025-11-01T00:45:18.516740023Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:45:18.520099 env[1313]: time="2025-11-01T00:45:18.520060671Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:45:18.522364 env[1313]: time="2025-11-01T00:45:18.522297806Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:45:18.524239 env[1313]: time="2025-11-01T00:45:18.524185135Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:45:18.524668 env[1313]: time="2025-11-01T00:45:18.524601637Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\""
Nov 1 00:45:18.525253 env[1313]: time="2025-11-01T00:45:18.525218523Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Nov 1 00:45:19.251317 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3292744565.mount: Deactivated successfully.
Nov 1 00:45:20.457257 env[1313]: time="2025-11-01T00:45:20.457186954Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:45:20.489140 env[1313]: time="2025-11-01T00:45:20.489083722Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:45:20.573074 env[1313]: time="2025-11-01T00:45:20.573020934Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:45:20.745257 env[1313]: time="2025-11-01T00:45:20.745124910Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:45:20.746138 env[1313]: time="2025-11-01T00:45:20.746101231Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Nov 1 00:45:20.746653 env[1313]: time="2025-11-01T00:45:20.746629522Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Nov 1 00:45:21.786160 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1674570505.mount: Deactivated successfully.
Nov 1 00:45:21.797186 env[1313]: time="2025-11-01T00:45:21.797110178Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:45:21.799787 env[1313]: time="2025-11-01T00:45:21.799751141Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:45:21.801312 env[1313]: time="2025-11-01T00:45:21.801268637Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:45:21.803654 env[1313]: time="2025-11-01T00:45:21.803613845Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:45:21.804164 env[1313]: time="2025-11-01T00:45:21.804127047Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Nov 1 00:45:21.804752 env[1313]: time="2025-11-01T00:45:21.804724939Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Nov 1 00:45:22.405706 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount513239312.mount: Deactivated successfully.
Nov 1 00:45:25.655524 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Nov 1 00:45:25.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:45:25.655723 systemd[1]: Stopped kubelet.service.
Nov 1 00:45:25.657148 systemd[1]: Starting kubelet.service...
Nov 1 00:45:25.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:45:25.667368 kernel: audit: type=1130 audit(1761957925.655:182): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:45:25.667441 kernel: audit: type=1131 audit(1761957925.655:183): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:45:25.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:45:25.756226 systemd[1]: Started kubelet.service.
Nov 1 00:45:25.762390 kernel: audit: type=1130 audit(1761957925.756:184): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:45:25.797951 kubelet[1635]: E1101 00:45:25.797877 1635 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 1 00:45:25.800000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Nov 1 00:45:25.807452 kernel: audit: type=1131 audit(1761957925.800:185): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Nov 1 00:45:25.800135 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 1 00:45:25.800301 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 1 00:45:26.254945 env[1313]: time="2025-11-01T00:45:26.254866172Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:45:26.257441 env[1313]: time="2025-11-01T00:45:26.257372632Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:45:26.259456 env[1313]: time="2025-11-01T00:45:26.259423659Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:45:26.261629 env[1313]: time="2025-11-01T00:45:26.261582477Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:45:26.262477 env[1313]: time="2025-11-01T00:45:26.262435667Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Nov 1 00:45:29.192476 systemd[1]: Stopped kubelet.service.
Nov 1 00:45:29.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:45:29.194405 systemd[1]: Starting kubelet.service...
Nov 1 00:45:29.192000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:45:29.204579 kernel: audit: type=1130 audit(1761957929.192:186): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:45:29.204636 kernel: audit: type=1131 audit(1761957929.192:187): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:45:29.217425 systemd[1]: Reloading.
Nov 1 00:45:29.288625 /usr/lib/systemd/system-generators/torcx-generator[1693]: time="2025-11-01T00:45:29Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Nov 1 00:45:29.288654 /usr/lib/systemd/system-generators/torcx-generator[1693]: time="2025-11-01T00:45:29Z" level=info msg="torcx already run"
Nov 1 00:45:29.608054 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Nov 1 00:45:29.608073 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 1 00:45:29.627980 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 1 00:45:29.710961 systemd[1]: Started kubelet.service. Nov 1 00:45:29.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:45:29.714776 systemd[1]: Stopping kubelet.service... Nov 1 00:45:29.715469 systemd[1]: kubelet.service: Deactivated successfully. Nov 1 00:45:29.715685 systemd[1]: Stopped kubelet.service. Nov 1 00:45:29.715000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:45:29.717232 systemd[1]: Starting kubelet.service... Nov 1 00:45:29.722778 kernel: audit: type=1130 audit(1761957929.710:188): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:45:29.722878 kernel: audit: type=1131 audit(1761957929.715:189): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:45:29.807684 systemd[1]: Started kubelet.service. Nov 1 00:45:29.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:45:29.815415 kernel: audit: type=1130 audit(1761957929.807:190): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:45:29.847845 kubelet[1757]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:45:29.847845 kubelet[1757]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 00:45:29.847845 kubelet[1757]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:45:29.848265 kubelet[1757]: I1101 00:45:29.847902 1757 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:45:29.967608 kubelet[1757]: I1101 00:45:29.967555 1757 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 1 00:45:29.967608 kubelet[1757]: I1101 00:45:29.967589 1757 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:45:29.967910 kubelet[1757]: I1101 00:45:29.967884 1757 server.go:954] "Client rotation is on, will bootstrap in background" Nov 1 00:45:29.986783 kubelet[1757]: E1101 00:45:29.986711 1757 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.128:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:45:29.988999 kubelet[1757]: I1101 00:45:29.988966 1757 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:45:29.993744 
kubelet[1757]: E1101 00:45:29.993704 1757 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 00:45:29.993744 kubelet[1757]: I1101 00:45:29.993734 1757 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 1 00:45:29.997447 kubelet[1757]: I1101 00:45:29.997423 1757 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 1 00:45:29.998639 kubelet[1757]: I1101 00:45:29.998598 1757 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:45:29.998805 kubelet[1757]: I1101 00:45:29.998629 1757 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessTh
an","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Nov 1 00:45:29.998944 kubelet[1757]: I1101 00:45:29.998809 1757 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 00:45:29.998944 kubelet[1757]: I1101 00:45:29.998817 1757 container_manager_linux.go:304] "Creating device plugin manager" Nov 1 00:45:29.998944 kubelet[1757]: I1101 00:45:29.998935 1757 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:45:30.001426 kubelet[1757]: I1101 00:45:30.001385 1757 kubelet.go:446] "Attempting to sync node with API server" Nov 1 00:45:30.001493 kubelet[1757]: I1101 00:45:30.001432 1757 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:45:30.001493 kubelet[1757]: I1101 00:45:30.001466 1757 kubelet.go:352] "Adding apiserver pod source" Nov 1 00:45:30.001493 kubelet[1757]: I1101 00:45:30.001481 1757 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:45:30.025437 kubelet[1757]: W1101 00:45:30.025372 1757 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.128:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.128:6443: connect: connection refused Nov 1 00:45:30.025624 kubelet[1757]: E1101 00:45:30.025453 1757 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.128:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:45:30.025624 kubelet[1757]: W1101 00:45:30.025584 1757 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.128:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.128:6443: connect: connection refused Nov 1 00:45:30.025693 kubelet[1757]: I1101 00:45:30.025610 1757 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Nov 1 00:45:30.025693 kubelet[1757]: E1101 00:45:30.025643 1757 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.128:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:45:30.026223 kubelet[1757]: I1101 00:45:30.026198 1757 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 1 00:45:30.028288 kubelet[1757]: W1101 00:45:30.028270 1757 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Nov 1 00:45:30.036797 kubelet[1757]: I1101 00:45:30.036776 1757 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 00:45:30.036870 kubelet[1757]: I1101 00:45:30.036828 1757 server.go:1287] "Started kubelet" Nov 1 00:45:30.036995 kubelet[1757]: I1101 00:45:30.036969 1757 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:45:30.042561 kubelet[1757]: I1101 00:45:30.042535 1757 server.go:479] "Adding debug handlers to kubelet server" Nov 1 00:45:30.042000 audit[1757]: AVC avc: denied { mac_admin } for pid=1757 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:45:30.043744 kubelet[1757]: I1101 00:45:30.043185 1757 kubelet.go:1507] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins_registry: invalid argument" Nov 1 00:45:30.043744 kubelet[1757]: I1101 00:45:30.043224 1757 kubelet.go:1511] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins: invalid argument" Nov 1 00:45:30.043744 kubelet[1757]: I1101 00:45:30.043300 1757 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:45:30.044935 kubelet[1757]: I1101 00:45:30.044038 1757 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:45:30.044935 kubelet[1757]: I1101 00:45:30.044270 1757 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:45:30.044935 kubelet[1757]: I1101 00:45:30.044421 1757 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:45:30.047037 kubelet[1757]: E1101 00:45:30.046875 1757 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 00:45:30.047037 kubelet[1757]: I1101 00:45:30.046916 1757 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 00:45:30.047313 kubelet[1757]: I1101 00:45:30.047183 1757 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 00:45:30.047313 kubelet[1757]: I1101 00:45:30.047235 1757 reconciler.go:26] "Reconciler: start to sync state" Nov 1 00:45:30.047696 kubelet[1757]: W1101 00:45:30.047652 1757 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.128:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.128:6443: connect: connection refused Nov 1 00:45:30.047750 kubelet[1757]: E1101 00:45:30.047709 1757 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.128:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:45:30.048149 kubelet[1757]: I1101 00:45:30.048133 1757 factory.go:221] Registration of the systemd container factory successfully Nov 1 00:45:30.042000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 00:45:30.042000 audit[1757]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000926e40 a1=c00005b7b8 a2=c000926e10 a3=25 items=0 ppid=1 pid=1757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:30.042000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 00:45:30.042000 audit[1757]: AVC avc: denied { mac_admin } for pid=1757 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:45:30.042000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 00:45:30.042000 audit[1757]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000c73060 a1=c00005b7d0 a2=c000926ed0 a3=25 items=0 ppid=1 pid=1757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:30.042000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 00:45:30.049375 kernel: audit: type=1400 audit(1761957930.042:191): avc: denied { mac_admin } for pid=1757 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:45:30.045000 audit[1770]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1770 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:45:30.045000 audit[1770]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc6ec4b480 a2=0 a3=7ffc6ec4b46c items=0 ppid=1757 pid=1770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:30.045000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Nov 1 00:45:30.046000 audit[1771]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1771 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:45:30.046000 audit[1771]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeb6379010 a2=0 a3=7ffeb6378ffc items=0 ppid=1757 pid=1771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:30.046000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Nov 1 00:45:30.048000 audit[1773]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1773 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:45:30.048000 audit[1773]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fffe39a43f0 a2=0 a3=7fffe39a43dc items=0 ppid=1757 pid=1773 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:30.048000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Nov 1 00:45:30.049695 kubelet[1757]: E1101 00:45:30.048138 1757 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.128:6443: connect: connection refused" interval="200ms" Nov 1 00:45:30.049695 kubelet[1757]: E1101 00:45:30.048214 1757 kubelet.go:1555] "Image garbage collection failed 
once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 00:45:30.049695 kubelet[1757]: I1101 00:45:30.049037 1757 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:45:30.050192 kubelet[1757]: I1101 00:45:30.050167 1757 factory.go:221] Registration of the containerd container factory successfully Nov 1 00:45:30.050645 kubelet[1757]: E1101 00:45:30.049536 1757 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.128:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.128:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1873bb66f1069da0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-01 00:45:30.036796832 +0000 UTC m=+0.223524008,LastTimestamp:2025-11-01 00:45:30.036796832 +0000 UTC m=+0.223524008,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 1 00:45:30.050000 audit[1775]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1775 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:45:30.050000 audit[1775]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffffaed2930 a2=0 a3=7ffffaed291c items=0 ppid=1757 pid=1775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:30.050000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Nov 1 00:45:30.057000 audit[1778]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1778 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:45:30.057000 audit[1778]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffc02e92090 a2=0 a3=7ffc02e9207c items=0 ppid=1757 pid=1778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:30.057000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Nov 1 00:45:30.057703 kubelet[1757]: I1101 00:45:30.057651 1757 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 1 00:45:30.058000 audit[1779]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=1779 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:45:30.058000 audit[1779]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff560fcc10 a2=0 a3=7fff560fcbfc items=0 ppid=1757 pid=1779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:30.058000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Nov 1 00:45:30.058611 kubelet[1757]: I1101 00:45:30.058555 1757 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 1 00:45:30.058611 kubelet[1757]: I1101 00:45:30.058582 1757 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 1 00:45:30.058680 kubelet[1757]: I1101 00:45:30.058608 1757 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 1 00:45:30.058680 kubelet[1757]: I1101 00:45:30.058619 1757 kubelet.go:2382] "Starting kubelet main sync loop" Nov 1 00:45:30.058746 kubelet[1757]: E1101 00:45:30.058681 1757 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:45:30.059000 audit[1781]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=1781 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:45:30.059000 audit[1781]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff5ac1ba30 a2=0 a3=7fff5ac1ba1c items=0 ppid=1757 pid=1781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:30.059000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Nov 1 00:45:30.060000 audit[1782]: NETFILTER_CFG table=nat:33 family=2 entries=1 op=nft_register_chain pid=1782 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:45:30.060000 audit[1782]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff581c78b0 a2=0 a3=7fff581c789c items=0 ppid=1757 pid=1782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:30.060000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Nov 1 00:45:30.061000 audit[1783]: NETFILTER_CFG table=filter:34 family=2 entries=1 op=nft_register_chain pid=1783 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:45:30.061000 audit[1783]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff947009b0 a2=0 a3=7fff9470099c items=0 ppid=1757 pid=1783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:30.061000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Nov 1 00:45:30.061000 audit[1784]: NETFILTER_CFG table=mangle:35 family=10 entries=1 op=nft_register_chain pid=1784 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:45:30.061000 audit[1784]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff1d41dc50 a2=0 a3=7fff1d41dc3c items=0 ppid=1757 pid=1784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:30.061000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Nov 1 00:45:30.062000 audit[1786]: NETFILTER_CFG table=nat:36 family=10 entries=2 op=nft_register_chain pid=1786 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:45:30.062000 audit[1786]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7fff6dd09350 a2=0 a3=7fff6dd0933c items=0 ppid=1757 pid=1786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) 
Nov 1 00:45:30.062000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Nov 1 00:45:30.063000 audit[1788]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=1788 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:45:30.063000 audit[1788]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc25e01d80 a2=0 a3=7ffc25e01d6c items=0 ppid=1757 pid=1788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:30.063000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Nov 1 00:45:30.065435 kubelet[1757]: I1101 00:45:30.065419 1757 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:45:30.065435 kubelet[1757]: I1101 00:45:30.065431 1757 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:45:30.065523 kubelet[1757]: I1101 00:45:30.065444 1757 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:45:30.065860 kubelet[1757]: W1101 00:45:30.065819 1757 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.128:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.128:6443: connect: connection refused Nov 1 00:45:30.065934 kubelet[1757]: E1101 00:45:30.065874 1757 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.128:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:45:30.147415 kubelet[1757]: E1101 00:45:30.147370 1757 kubelet_node_status.go:466] "Error 
getting the current node from lister" err="node \"localhost\" not found" Nov 1 00:45:30.159580 kubelet[1757]: E1101 00:45:30.159549 1757 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 1 00:45:30.247906 kubelet[1757]: E1101 00:45:30.247779 1757 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 00:45:30.250297 kubelet[1757]: E1101 00:45:30.250261 1757 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.128:6443: connect: connection refused" interval="400ms" Nov 1 00:45:30.348440 kubelet[1757]: E1101 00:45:30.348411 1757 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 00:45:30.360636 kubelet[1757]: E1101 00:45:30.360593 1757 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 1 00:45:30.449015 kubelet[1757]: E1101 00:45:30.448948 1757 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 00:45:30.522436 kubelet[1757]: I1101 00:45:30.522284 1757 policy_none.go:49] "None policy: Start" Nov 1 00:45:30.522436 kubelet[1757]: I1101 00:45:30.522364 1757 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 00:45:30.522436 kubelet[1757]: I1101 00:45:30.522387 1757 state_mem.go:35] "Initializing new in-memory state store" Nov 1 00:45:30.528860 kubelet[1757]: I1101 00:45:30.528826 1757 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 1 00:45:30.529000 audit[1757]: AVC avc: denied { mac_admin } for pid=1757 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Nov 1 00:45:30.529000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 00:45:30.529000 audit[1757]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00101a8d0 a1=c00094f2c0 a2=c00101a8a0 a3=25 items=0 ppid=1 pid=1757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:30.529000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 00:45:30.529774 kubelet[1757]: I1101 00:45:30.529528 1757 server.go:94] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/device-plugins/: invalid argument" Nov 1 00:45:30.529774 kubelet[1757]: I1101 00:45:30.529663 1757 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:45:30.529774 kubelet[1757]: I1101 00:45:30.529674 1757 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:45:30.530283 kubelet[1757]: I1101 00:45:30.529963 1757 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:45:30.530849 kubelet[1757]: E1101 00:45:30.530818 1757 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 1 00:45:30.530909 kubelet[1757]: E1101 00:45:30.530869 1757 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 1 00:45:30.631779 kubelet[1757]: I1101 00:45:30.631734 1757 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:45:30.632211 kubelet[1757]: E1101 00:45:30.632180 1757 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.128:6443/api/v1/nodes\": dial tcp 10.0.0.128:6443: connect: connection refused" node="localhost" Nov 1 00:45:30.650910 kubelet[1757]: E1101 00:45:30.650880 1757 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.128:6443: connect: connection refused" interval="800ms" Nov 1 00:45:30.767984 kubelet[1757]: E1101 00:45:30.767947 1757 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:45:30.769548 kubelet[1757]: E1101 00:45:30.769508 1757 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:45:30.770202 kubelet[1757]: E1101 00:45:30.770171 1757 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:45:30.833844 kubelet[1757]: I1101 00:45:30.833707 1757 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:45:30.834452 kubelet[1757]: E1101 00:45:30.834413 1757 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.128:6443/api/v1/nodes\": dial tcp 10.0.0.128:6443: connect: connection refused" node="localhost" 
Nov 1 00:45:30.850769 kubelet[1757]: I1101 00:45:30.850735 1757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:45:30.851088 kubelet[1757]: I1101 00:45:30.850775 1757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Nov 1 00:45:30.851088 kubelet[1757]: I1101 00:45:30.850801 1757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3e21f0276cd19c1d4cf822672e5496c8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3e21f0276cd19c1d4cf822672e5496c8\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:45:30.851088 kubelet[1757]: I1101 00:45:30.850821 1757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3e21f0276cd19c1d4cf822672e5496c8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3e21f0276cd19c1d4cf822672e5496c8\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:45:30.851088 kubelet[1757]: I1101 00:45:30.850876 1757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:45:30.851088 
kubelet[1757]: I1101 00:45:30.850898 1757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:45:30.851217 kubelet[1757]: I1101 00:45:30.850919 1757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:45:30.851217 kubelet[1757]: I1101 00:45:30.850940 1757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3e21f0276cd19c1d4cf822672e5496c8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3e21f0276cd19c1d4cf822672e5496c8\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:45:30.851217 kubelet[1757]: I1101 00:45:30.850960 1757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:45:31.068558 kubelet[1757]: E1101 00:45:31.068520 1757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:45:31.069063 env[1313]: time="2025-11-01T00:45:31.069026932Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3e21f0276cd19c1d4cf822672e5496c8,Namespace:kube-system,Attempt:0,}" Nov 1 00:45:31.070119 kubelet[1757]: E1101 00:45:31.070091 1757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:45:31.070473 kubelet[1757]: E1101 00:45:31.070450 1757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:45:31.070536 env[1313]: time="2025-11-01T00:45:31.070491969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,}" Nov 1 00:45:31.070722 env[1313]: time="2025-11-01T00:45:31.070691894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,}" Nov 1 00:45:31.189750 kubelet[1757]: W1101 00:45:31.189681 1757 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.128:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.128:6443: connect: connection refused Nov 1 00:45:31.189750 kubelet[1757]: E1101 00:45:31.189752 1757 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.128:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:45:31.206590 kubelet[1757]: W1101 00:45:31.206493 1757 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://10.0.0.128:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.128:6443: connect: connection refused Nov 1 00:45:31.206590 kubelet[1757]: E1101 00:45:31.206581 1757 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.128:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:45:31.235902 kubelet[1757]: I1101 00:45:31.235867 1757 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:45:31.236232 kubelet[1757]: E1101 00:45:31.236186 1757 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.128:6443/api/v1/nodes\": dial tcp 10.0.0.128:6443: connect: connection refused" node="localhost" Nov 1 00:45:31.370892 kubelet[1757]: W1101 00:45:31.370807 1757 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.128:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.128:6443: connect: connection refused Nov 1 00:45:31.371037 kubelet[1757]: E1101 00:45:31.370898 1757 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.128:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:45:31.442891 kubelet[1757]: W1101 00:45:31.442734 1757 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.128:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.128:6443: connect: connection refused Nov 1 00:45:31.442891 kubelet[1757]: E1101 00:45:31.442810 1757 
reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.128:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.128:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:45:31.451677 kubelet[1757]: E1101 00:45:31.451631 1757 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.128:6443: connect: connection refused" interval="1.6s" Nov 1 00:45:31.716022 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1931910678.mount: Deactivated successfully. Nov 1 00:45:31.720417 env[1313]: time="2025-11-01T00:45:31.720372718Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:45:31.723257 env[1313]: time="2025-11-01T00:45:31.723213796Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:45:31.724826 env[1313]: time="2025-11-01T00:45:31.724772919Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:45:31.725728 env[1313]: time="2025-11-01T00:45:31.725700519Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:45:31.727443 env[1313]: time="2025-11-01T00:45:31.727398333Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Nov 1 00:45:31.728824 env[1313]: time="2025-11-01T00:45:31.728787108Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:45:31.730237 env[1313]: time="2025-11-01T00:45:31.730185690Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:45:31.732257 env[1313]: time="2025-11-01T00:45:31.732232088Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:45:31.734178 env[1313]: time="2025-11-01T00:45:31.734143112Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:45:31.735470 env[1313]: time="2025-11-01T00:45:31.735391172Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:45:31.736988 env[1313]: time="2025-11-01T00:45:31.736956989Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:45:31.737768 env[1313]: time="2025-11-01T00:45:31.737731722Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:45:31.752332 env[1313]: time="2025-11-01T00:45:31.752259947Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:45:31.752332 env[1313]: time="2025-11-01T00:45:31.752306735Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:45:31.752332 env[1313]: time="2025-11-01T00:45:31.752319910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:45:31.752604 env[1313]: time="2025-11-01T00:45:31.752524604Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2f3b1b2b8a887b84b37dd177af94c154a0bd42e2e80006d90d00444d350a6ec1 pid=1800 runtime=io.containerd.runc.v2 Nov 1 00:45:31.771658 env[1313]: time="2025-11-01T00:45:31.771596440Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:45:31.771658 env[1313]: time="2025-11-01T00:45:31.771660320Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:45:31.771836 env[1313]: time="2025-11-01T00:45:31.771682682Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:45:31.771892 env[1313]: time="2025-11-01T00:45:31.771816373Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6958f3f1f367c74d2242d1f7d1217c5937e686c0d3a823288bec61b3efad6cbb pid=1834 runtime=io.containerd.runc.v2 Nov 1 00:45:31.780071 env[1313]: time="2025-11-01T00:45:31.779982958Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:45:31.780071 env[1313]: time="2025-11-01T00:45:31.780022883Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:45:31.780071 env[1313]: time="2025-11-01T00:45:31.780033513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:45:31.780239 env[1313]: time="2025-11-01T00:45:31.780143860Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9cbd068644c729f9272142664b62291441bae07ef48f619072ccf3ed3bad48e7 pid=1855 runtime=io.containerd.runc.v2 Nov 1 00:45:31.821508 env[1313]: time="2025-11-01T00:45:31.821464601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f3b1b2b8a887b84b37dd177af94c154a0bd42e2e80006d90d00444d350a6ec1\"" Nov 1 00:45:31.822942 kubelet[1757]: E1101 00:45:31.822723 1757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:45:31.824954 env[1313]: time="2025-11-01T00:45:31.824923057Z" level=info msg="CreateContainer within sandbox \"2f3b1b2b8a887b84b37dd177af94c154a0bd42e2e80006d90d00444d350a6ec1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 1 00:45:31.838208 env[1313]: time="2025-11-01T00:45:31.838144091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,} returns sandbox id \"9cbd068644c729f9272142664b62291441bae07ef48f619072ccf3ed3bad48e7\"" Nov 1 00:45:31.838810 kubelet[1757]: E1101 00:45:31.838766 1757 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:45:31.840779 env[1313]: time="2025-11-01T00:45:31.840747603Z" level=info msg="CreateContainer within sandbox \"9cbd068644c729f9272142664b62291441bae07ef48f619072ccf3ed3bad48e7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 1 00:45:31.841487 env[1313]: time="2025-11-01T00:45:31.841438850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3e21f0276cd19c1d4cf822672e5496c8,Namespace:kube-system,Attempt:0,} returns sandbox id \"6958f3f1f367c74d2242d1f7d1217c5937e686c0d3a823288bec61b3efad6cbb\"" Nov 1 00:45:31.841869 kubelet[1757]: E1101 00:45:31.841841 1757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:45:31.842962 env[1313]: time="2025-11-01T00:45:31.842922933Z" level=info msg="CreateContainer within sandbox \"6958f3f1f367c74d2242d1f7d1217c5937e686c0d3a823288bec61b3efad6cbb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 1 00:45:31.846234 env[1313]: time="2025-11-01T00:45:31.846187375Z" level=info msg="CreateContainer within sandbox \"2f3b1b2b8a887b84b37dd177af94c154a0bd42e2e80006d90d00444d350a6ec1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"99c934c25cfc17e53464c545c29e5177b1e33793ef09f9f289789e5bcf73ce37\"" Nov 1 00:45:31.846648 env[1313]: time="2025-11-01T00:45:31.846619205Z" level=info msg="StartContainer for \"99c934c25cfc17e53464c545c29e5177b1e33793ef09f9f289789e5bcf73ce37\"" Nov 1 00:45:31.862918 env[1313]: time="2025-11-01T00:45:31.862880430Z" level=info msg="CreateContainer within sandbox \"9cbd068644c729f9272142664b62291441bae07ef48f619072ccf3ed3bad48e7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"ca0c725a110ccd6559c69442f1c02f6a23eed986dd6eeedd304e9cf35a4128da\"" Nov 1 00:45:31.863404 env[1313]: time="2025-11-01T00:45:31.863384185Z" level=info msg="StartContainer for \"ca0c725a110ccd6559c69442f1c02f6a23eed986dd6eeedd304e9cf35a4128da\"" Nov 1 00:45:31.873320 env[1313]: time="2025-11-01T00:45:31.873287227Z" level=info msg="CreateContainer within sandbox \"6958f3f1f367c74d2242d1f7d1217c5937e686c0d3a823288bec61b3efad6cbb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c654d40a5db4ee794caf3c32cd58e6550fb82ac97bb397ff193d84bd19a0b710\"" Nov 1 00:45:31.873951 env[1313]: time="2025-11-01T00:45:31.873933118Z" level=info msg="StartContainer for \"c654d40a5db4ee794caf3c32cd58e6550fb82ac97bb397ff193d84bd19a0b710\"" Nov 1 00:45:31.907550 env[1313]: time="2025-11-01T00:45:31.907505007Z" level=info msg="StartContainer for \"99c934c25cfc17e53464c545c29e5177b1e33793ef09f9f289789e5bcf73ce37\" returns successfully" Nov 1 00:45:31.923995 env[1313]: time="2025-11-01T00:45:31.923949236Z" level=info msg="StartContainer for \"ca0c725a110ccd6559c69442f1c02f6a23eed986dd6eeedd304e9cf35a4128da\" returns successfully" Nov 1 00:45:31.940552 env[1313]: time="2025-11-01T00:45:31.940491960Z" level=info msg="StartContainer for \"c654d40a5db4ee794caf3c32cd58e6550fb82ac97bb397ff193d84bd19a0b710\" returns successfully" Nov 1 00:45:31.958497 kubelet[1757]: E1101 00:45:31.958381 1757 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.128:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.128:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1873bb66f1069da0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-01 00:45:30.036796832 +0000 UTC 
m=+0.223524008,LastTimestamp:2025-11-01 00:45:30.036796832 +0000 UTC m=+0.223524008,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 1 00:45:32.038239 kubelet[1757]: I1101 00:45:32.037730 1757 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:45:32.070265 kubelet[1757]: E1101 00:45:32.070231 1757 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:45:32.070587 kubelet[1757]: E1101 00:45:32.070574 1757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:45:32.072170 kubelet[1757]: E1101 00:45:32.072156 1757 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:45:32.072397 kubelet[1757]: E1101 00:45:32.072378 1757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:45:32.073839 kubelet[1757]: E1101 00:45:32.073821 1757 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:45:32.074097 kubelet[1757]: E1101 00:45:32.074070 1757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:45:33.076660 kubelet[1757]: E1101 00:45:33.076596 1757 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:45:33.077189 kubelet[1757]: E1101 00:45:33.076831 1757 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:45:33.077374 kubelet[1757]: E1101 00:45:33.077338 1757 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:45:33.077512 kubelet[1757]: E1101 00:45:33.077492 1757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:45:33.488790 kubelet[1757]: E1101 00:45:33.488744 1757 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 1 00:45:33.663935 kubelet[1757]: I1101 00:45:33.663890 1757 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 1 00:45:33.748055 kubelet[1757]: I1101 00:45:33.747934 1757 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 1 00:45:33.752553 kubelet[1757]: E1101 00:45:33.752511 1757 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 1 00:45:33.752553 kubelet[1757]: I1101 00:45:33.752544 1757 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:45:33.754100 kubelet[1757]: E1101 00:45:33.754067 1757 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:45:33.754100 kubelet[1757]: I1101 00:45:33.754089 1757 kubelet.go:3194] "Creating a mirror pod for static pod" 
pod="kube-system/kube-scheduler-localhost" Nov 1 00:45:33.755391 kubelet[1757]: E1101 00:45:33.755370 1757 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 1 00:45:34.019933 kubelet[1757]: I1101 00:45:34.019819 1757 apiserver.go:52] "Watching apiserver" Nov 1 00:45:34.047568 kubelet[1757]: I1101 00:45:34.047526 1757 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 00:45:34.077121 kubelet[1757]: I1101 00:45:34.077077 1757 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 1 00:45:34.079005 kubelet[1757]: E1101 00:45:34.078983 1757 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 1 00:45:34.079174 kubelet[1757]: E1101 00:45:34.079129 1757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:45:34.606027 kubelet[1757]: I1101 00:45:34.605996 1757 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:45:34.609732 kubelet[1757]: E1101 00:45:34.609707 1757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:45:34.674254 kubelet[1757]: I1101 00:45:34.674205 1757 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 1 00:45:34.852523 kubelet[1757]: E1101 00:45:34.852478 1757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:45:35.079496 kubelet[1757]: E1101 00:45:35.079465 1757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:45:35.079957 kubelet[1757]: E1101 00:45:35.079541 1757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:45:35.753282 systemd[1]: Reloading. Nov 1 00:45:35.807723 /usr/lib/systemd/system-generators/torcx-generator[2054]: time="2025-11-01T00:45:35Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 00:45:35.807764 /usr/lib/systemd/system-generators/torcx-generator[2054]: time="2025-11-01T00:45:35Z" level=info msg="torcx already run" Nov 1 00:45:35.882886 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 00:45:35.882903 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 00:45:35.900438 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:45:35.984814 systemd[1]: Stopping kubelet.service... Nov 1 00:45:36.005680 systemd[1]: kubelet.service: Deactivated successfully. Nov 1 00:45:36.005954 systemd[1]: Stopped kubelet.service. 
Nov 1 00:45:36.004000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:45:36.007853 kernel: kauditd_printk_skb: 47 callbacks suppressed Nov 1 00:45:36.007939 kernel: audit: type=1131 audit(1761957936.004:206): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:45:36.014539 systemd[1]: Starting kubelet.service... Nov 1 00:45:36.135477 systemd[1]: Started kubelet.service. Nov 1 00:45:36.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:45:36.174585 kernel: audit: type=1130 audit(1761957936.135:207): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:45:36.194280 kubelet[2110]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:45:36.194280 kubelet[2110]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 00:45:36.194280 kubelet[2110]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 1 00:45:36.194744 kubelet[2110]: I1101 00:45:36.194364 2110 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:45:36.204074 kubelet[2110]: I1101 00:45:36.204021 2110 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 1 00:45:36.204074 kubelet[2110]: I1101 00:45:36.204051 2110 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:45:36.204359 kubelet[2110]: I1101 00:45:36.204320 2110 server.go:954] "Client rotation is on, will bootstrap in background" Nov 1 00:45:36.206579 kubelet[2110]: I1101 00:45:36.205465 2110 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 1 00:45:36.216507 kubelet[2110]: I1101 00:45:36.216428 2110 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:45:36.233869 kubelet[2110]: E1101 00:45:36.233826 2110 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 00:45:36.233869 kubelet[2110]: I1101 00:45:36.233860 2110 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 1 00:45:36.237564 kubelet[2110]: I1101 00:45:36.237538 2110 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 1 00:45:36.237972 kubelet[2110]: I1101 00:45:36.237933 2110 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:45:36.238120 kubelet[2110]: I1101 00:45:36.237959 2110 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Nov 1 00:45:36.238235 kubelet[2110]: I1101 00:45:36.238121 2110 topology_manager.go:138] "Creating topology manager with none policy" 
Nov 1 00:45:36.238235 kubelet[2110]: I1101 00:45:36.238132 2110 container_manager_linux.go:304] "Creating device plugin manager" Nov 1 00:45:36.238235 kubelet[2110]: I1101 00:45:36.238184 2110 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:45:36.238308 kubelet[2110]: I1101 00:45:36.238283 2110 kubelet.go:446] "Attempting to sync node with API server" Nov 1 00:45:36.238308 kubelet[2110]: I1101 00:45:36.238301 2110 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:45:36.238363 kubelet[2110]: I1101 00:45:36.238318 2110 kubelet.go:352] "Adding apiserver pod source" Nov 1 00:45:36.238363 kubelet[2110]: I1101 00:45:36.238328 2110 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:45:36.241000 audit[2110]: AVC avc: denied { mac_admin } for pid=2110 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:45:36.244313 kubelet[2110]: I1101 00:45:36.240220 2110 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Nov 1 00:45:36.244313 kubelet[2110]: I1101 00:45:36.241034 2110 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 1 00:45:36.244313 kubelet[2110]: I1101 00:45:36.242214 2110 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 00:45:36.244313 kubelet[2110]: I1101 00:45:36.242249 2110 server.go:1287] "Started kubelet" Nov 1 00:45:36.244313 kubelet[2110]: I1101 00:45:36.243327 2110 kubelet.go:1507] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins_registry: invalid argument" Nov 1 00:45:36.244313 kubelet[2110]: I1101 00:45:36.243418 2110 kubelet.go:1511] "Unprivileged containerized plugins 
might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins: invalid argument" Nov 1 00:45:36.244313 kubelet[2110]: I1101 00:45:36.243454 2110 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:45:36.244313 kubelet[2110]: I1101 00:45:36.243640 2110 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:45:36.244545 kubelet[2110]: I1101 00:45:36.244400 2110 server.go:479] "Adding debug handlers to kubelet server" Nov 1 00:45:36.245208 kubelet[2110]: I1101 00:45:36.245152 2110 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:45:36.245370 kubelet[2110]: I1101 00:45:36.245357 2110 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:45:36.248684 kubelet[2110]: I1101 00:45:36.248653 2110 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:45:36.249019 kubelet[2110]: I1101 00:45:36.248989 2110 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 00:45:36.251684 kernel: audit: type=1400 audit(1761957936.241:208): avc: denied { mac_admin } for pid=2110 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:45:36.251926 kernel: audit: type=1401 audit(1761957936.241:208): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 00:45:36.241000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 00:45:36.251977 kubelet[2110]: I1101 00:45:36.249085 2110 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 00:45:36.251977 kubelet[2110]: I1101 00:45:36.249185 2110 reconciler.go:26] "Reconciler: 
start to sync state" Nov 1 00:45:36.251977 kubelet[2110]: E1101 00:45:36.249264 2110 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 00:45:36.251977 kubelet[2110]: I1101 00:45:36.250163 2110 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:45:36.251977 kubelet[2110]: E1101 00:45:36.251260 2110 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 00:45:36.241000 audit[2110]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000930b10 a1=c000446948 a2=c000930ae0 a3=25 items=0 ppid=1 pid=2110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:36.258314 kubelet[2110]: I1101 00:45:36.257086 2110 factory.go:221] Registration of the containerd container factory successfully Nov 1 00:45:36.258314 kubelet[2110]: I1101 00:45:36.257180 2110 factory.go:221] Registration of the systemd container factory successfully Nov 1 00:45:36.264763 kernel: audit: type=1300 audit(1761957936.241:208): arch=c000003e syscall=188 success=no exit=-22 a0=c000930b10 a1=c000446948 a2=c000930ae0 a3=25 items=0 ppid=1 pid=2110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:36.241000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 00:45:36.266252 kubelet[2110]: 
I1101 00:45:36.266209 2110 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 1 00:45:36.268059 kubelet[2110]: I1101 00:45:36.268037 2110 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 1 00:45:36.268113 kubelet[2110]: I1101 00:45:36.268062 2110 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 1 00:45:36.268113 kubelet[2110]: I1101 00:45:36.268085 2110 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 1 00:45:36.268113 kubelet[2110]: I1101 00:45:36.268091 2110 kubelet.go:2382] "Starting kubelet main sync loop" Nov 1 00:45:36.268228 kubelet[2110]: E1101 00:45:36.268130 2110 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:45:36.272871 kernel: audit: type=1327 audit(1761957936.241:208): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 00:45:36.279373 kernel: audit: type=1400 audit(1761957936.242:209): avc: denied { mac_admin } for pid=2110 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:45:36.242000 audit[2110]: AVC avc: denied { mac_admin } for pid=2110 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:45:36.242000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 00:45:36.242000 audit[2110]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000449ae0 a1=c000446960 a2=c000930ba0 a3=25 items=0 ppid=1 pid=2110 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:36.290216 kernel: audit: type=1401 audit(1761957936.242:209): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 00:45:36.290282 kernel: audit: type=1300 audit(1761957936.242:209): arch=c000003e syscall=188 success=no exit=-22 a0=c000449ae0 a1=c000446960 a2=c000930ba0 a3=25 items=0 ppid=1 pid=2110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:36.242000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 00:45:36.297254 kernel: audit: type=1327 audit(1761957936.242:209): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 00:45:36.306160 kubelet[2110]: I1101 00:45:36.306111 2110 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:45:36.306160 kubelet[2110]: I1101 00:45:36.306135 2110 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:45:36.306160 kubelet[2110]: I1101 00:45:36.306169 2110 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:45:36.306709 kubelet[2110]: I1101 00:45:36.306690 2110 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 1 00:45:36.306754 kubelet[2110]: I1101 00:45:36.306707 2110 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 1 00:45:36.306754 kubelet[2110]: I1101 
00:45:36.306734 2110 policy_none.go:49] "None policy: Start" Nov 1 00:45:36.306754 kubelet[2110]: I1101 00:45:36.306746 2110 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 00:45:36.306878 kubelet[2110]: I1101 00:45:36.306761 2110 state_mem.go:35] "Initializing new in-memory state store" Nov 1 00:45:36.306919 kubelet[2110]: I1101 00:45:36.306899 2110 state_mem.go:75] "Updated machine memory state" Nov 1 00:45:36.308010 kubelet[2110]: I1101 00:45:36.307988 2110 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 1 00:45:36.306000 audit[2110]: AVC avc: denied { mac_admin } for pid=2110 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:45:36.306000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 00:45:36.306000 audit[2110]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000c1da40 a1=c00054b4e8 a2=c000c1da10 a3=25 items=0 ppid=1 pid=2110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:36.306000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 00:45:36.308243 kubelet[2110]: I1101 00:45:36.308061 2110 server.go:94] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/device-plugins/: invalid argument" Nov 1 00:45:36.308243 kubelet[2110]: I1101 00:45:36.308196 2110 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:45:36.308243 kubelet[2110]: I1101 00:45:36.308210 2110 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:45:36.308456 kubelet[2110]: I1101 00:45:36.308434 2110 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:45:36.310202 kubelet[2110]: E1101 00:45:36.310181 2110 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 1 00:45:36.369161 kubelet[2110]: I1101 00:45:36.369120 2110 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 1 00:45:36.369326 kubelet[2110]: I1101 00:45:36.369195 2110 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 1 00:45:36.369326 kubelet[2110]: I1101 00:45:36.369249 2110 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:45:36.375032 kubelet[2110]: E1101 00:45:36.374992 2110 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 1 00:45:36.375467 kubelet[2110]: E1101 00:45:36.375438 2110 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:45:36.411720 kubelet[2110]: I1101 00:45:36.411676 2110 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:45:36.419524 kubelet[2110]: I1101 00:45:36.419484 2110 
kubelet_node_status.go:124] "Node was previously registered" node="localhost" Nov 1 00:45:36.419603 kubelet[2110]: I1101 00:45:36.419557 2110 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 1 00:45:36.450461 kubelet[2110]: I1101 00:45:36.450409 2110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3e21f0276cd19c1d4cf822672e5496c8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3e21f0276cd19c1d4cf822672e5496c8\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:45:36.450461 kubelet[2110]: I1101 00:45:36.450462 2110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:45:36.450619 kubelet[2110]: I1101 00:45:36.450504 2110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:45:36.450619 kubelet[2110]: I1101 00:45:36.450547 2110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:45:36.450619 kubelet[2110]: I1101 00:45:36.450586 2110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Nov 1 00:45:36.450703 kubelet[2110]: I1101 00:45:36.450628 2110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3e21f0276cd19c1d4cf822672e5496c8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3e21f0276cd19c1d4cf822672e5496c8\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:45:36.450703 kubelet[2110]: I1101 00:45:36.450665 2110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3e21f0276cd19c1d4cf822672e5496c8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3e21f0276cd19c1d4cf822672e5496c8\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:45:36.450752 kubelet[2110]: I1101 00:45:36.450697 2110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:45:36.450752 kubelet[2110]: I1101 00:45:36.450738 2110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:45:36.675841 kubelet[2110]: E1101 00:45:36.675770 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:45:36.676042 kubelet[2110]: E1101 00:45:36.675854 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:45:36.676042 kubelet[2110]: E1101 00:45:36.675930 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:45:37.239802 kubelet[2110]: I1101 00:45:37.239762 2110 apiserver.go:52] "Watching apiserver" Nov 1 00:45:37.249416 kubelet[2110]: I1101 00:45:37.249387 2110 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 00:45:37.278932 kubelet[2110]: I1101 00:45:37.278895 2110 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 1 00:45:37.279095 kubelet[2110]: I1101 00:45:37.279078 2110 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 1 00:45:37.279210 kubelet[2110]: E1101 00:45:37.279153 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:45:37.323500 kubelet[2110]: E1101 00:45:37.323459 2110 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 1 00:45:37.323909 kubelet[2110]: E1101 00:45:37.323882 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:45:37.325126 kubelet[2110]: E1101 00:45:37.324734 2110 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 
1 00:45:37.325126 kubelet[2110]: E1101 00:45:37.324828 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:45:37.339924 kubelet[2110]: I1101 00:45:37.339813 2110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.339791477 podStartE2EDuration="3.339791477s" podCreationTimestamp="2025-11-01 00:45:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:45:37.339644935 +0000 UTC m=+1.199102735" watchObservedRunningTime="2025-11-01 00:45:37.339791477 +0000 UTC m=+1.199249267" Nov 1 00:45:37.340139 kubelet[2110]: I1101 00:45:37.339932 2110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.339926035 podStartE2EDuration="3.339926035s" podCreationTimestamp="2025-11-01 00:45:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:45:37.330893322 +0000 UTC m=+1.190351112" watchObservedRunningTime="2025-11-01 00:45:37.339926035 +0000 UTC m=+1.199383825" Nov 1 00:45:37.351463 kubelet[2110]: I1101 00:45:37.351398 2110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.351377485 podStartE2EDuration="1.351377485s" podCreationTimestamp="2025-11-01 00:45:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:45:37.351217488 +0000 UTC m=+1.210675278" watchObservedRunningTime="2025-11-01 00:45:37.351377485 +0000 UTC m=+1.210835295" Nov 1 00:45:38.282284 kubelet[2110]: E1101 00:45:38.282245 2110 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:45:38.283752 kubelet[2110]: E1101 00:45:38.283729 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:45:39.283533 kubelet[2110]: E1101 00:45:39.283493 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:45:40.284594 kubelet[2110]: E1101 00:45:40.284565 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:45:40.451376 kubelet[2110]: E1101 00:45:40.451316 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:45:40.616654 kubelet[2110]: I1101 00:45:40.616549 2110 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 1 00:45:40.616910 env[1313]: time="2025-11-01T00:45:40.616873264Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Nov 1 00:45:40.617270 kubelet[2110]: I1101 00:45:40.617052 2110 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 1 00:45:41.385377 kubelet[2110]: I1101 00:45:41.385309 2110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cf76c0fb-5810-4336-87c9-fafbd84f2f4f-xtables-lock\") pod \"kube-proxy-7ms9w\" (UID: \"cf76c0fb-5810-4336-87c9-fafbd84f2f4f\") " pod="kube-system/kube-proxy-7ms9w" Nov 1 00:45:41.385377 kubelet[2110]: I1101 00:45:41.385368 2110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cf76c0fb-5810-4336-87c9-fafbd84f2f4f-lib-modules\") pod \"kube-proxy-7ms9w\" (UID: \"cf76c0fb-5810-4336-87c9-fafbd84f2f4f\") " pod="kube-system/kube-proxy-7ms9w" Nov 1 00:45:41.385377 kubelet[2110]: I1101 00:45:41.385389 2110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kjbp\" (UniqueName: \"kubernetes.io/projected/cf76c0fb-5810-4336-87c9-fafbd84f2f4f-kube-api-access-5kjbp\") pod \"kube-proxy-7ms9w\" (UID: \"cf76c0fb-5810-4336-87c9-fafbd84f2f4f\") " pod="kube-system/kube-proxy-7ms9w" Nov 1 00:45:41.385904 kubelet[2110]: I1101 00:45:41.385415 2110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cf76c0fb-5810-4336-87c9-fafbd84f2f4f-kube-proxy\") pod \"kube-proxy-7ms9w\" (UID: \"cf76c0fb-5810-4336-87c9-fafbd84f2f4f\") " pod="kube-system/kube-proxy-7ms9w" Nov 1 00:45:41.490474 kubelet[2110]: E1101 00:45:41.490441 2110 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Nov 1 00:45:41.490474 kubelet[2110]: E1101 00:45:41.490474 2110 projected.go:194] Error preparing data for projected volume kube-api-access-5kjbp 
for pod kube-system/kube-proxy-7ms9w: configmap "kube-root-ca.crt" not found Nov 1 00:45:41.490685 kubelet[2110]: E1101 00:45:41.490534 2110 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf76c0fb-5810-4336-87c9-fafbd84f2f4f-kube-api-access-5kjbp podName:cf76c0fb-5810-4336-87c9-fafbd84f2f4f nodeName:}" failed. No retries permitted until 2025-11-01 00:45:41.990508024 +0000 UTC m=+5.849965814 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-5kjbp" (UniqueName: "kubernetes.io/projected/cf76c0fb-5810-4336-87c9-fafbd84f2f4f-kube-api-access-5kjbp") pod "kube-proxy-7ms9w" (UID: "cf76c0fb-5810-4336-87c9-fafbd84f2f4f") : configmap "kube-root-ca.crt" not found Nov 1 00:45:41.788109 kubelet[2110]: I1101 00:45:41.788044 2110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/627b7b4f-8ac8-4dac-9973-41fa3b85d0b3-var-lib-calico\") pod \"tigera-operator-7dcd859c48-76vkl\" (UID: \"627b7b4f-8ac8-4dac-9973-41fa3b85d0b3\") " pod="tigera-operator/tigera-operator-7dcd859c48-76vkl" Nov 1 00:45:41.788109 kubelet[2110]: I1101 00:45:41.788100 2110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6q8d\" (UniqueName: \"kubernetes.io/projected/627b7b4f-8ac8-4dac-9973-41fa3b85d0b3-kube-api-access-w6q8d\") pod \"tigera-operator-7dcd859c48-76vkl\" (UID: \"627b7b4f-8ac8-4dac-9973-41fa3b85d0b3\") " pod="tigera-operator/tigera-operator-7dcd859c48-76vkl" Nov 1 00:45:41.894572 kubelet[2110]: I1101 00:45:41.894520 2110 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 1 00:45:42.065428 env[1313]: time="2025-11-01T00:45:42.064656536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-76vkl,Uid:627b7b4f-8ac8-4dac-9973-41fa3b85d0b3,Namespace:tigera-operator,Attempt:0,}" Nov 1 00:45:42.088680 env[1313]: time="2025-11-01T00:45:42.088606554Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:45:42.088680 env[1313]: time="2025-11-01T00:45:42.088655007Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:45:42.088680 env[1313]: time="2025-11-01T00:45:42.088669154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:45:42.090241 env[1313]: time="2025-11-01T00:45:42.088814661Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/63c168069900de1b1ca6e59c6e992b6a7bcf867af037b76546e9b802b3df3566 pid=2168 runtime=io.containerd.runc.v2 Nov 1 00:45:42.196762 kubelet[2110]: E1101 00:45:42.196718 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:45:42.197644 env[1313]: time="2025-11-01T00:45:42.197593007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7ms9w,Uid:cf76c0fb-5810-4336-87c9-fafbd84f2f4f,Namespace:kube-system,Attempt:0,}" Nov 1 00:45:42.223227 env[1313]: time="2025-11-01T00:45:42.223143180Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:45:42.223227 env[1313]: time="2025-11-01T00:45:42.223181734Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:45:42.223227 env[1313]: time="2025-11-01T00:45:42.223191252Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:45:42.223510 env[1313]: time="2025-11-01T00:45:42.223337752Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/94a9e8cf520b95845e2a40c20f060f22ba309b3f4f2d05e612702ea7401d517d pid=2205 runtime=io.containerd.runc.v2 Nov 1 00:45:42.231005 env[1313]: time="2025-11-01T00:45:42.230949509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-76vkl,Uid:627b7b4f-8ac8-4dac-9973-41fa3b85d0b3,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"63c168069900de1b1ca6e59c6e992b6a7bcf867af037b76546e9b802b3df3566\"" Nov 1 00:45:42.232645 env[1313]: time="2025-11-01T00:45:42.232618554Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 1 00:45:42.272594 env[1313]: time="2025-11-01T00:45:42.272551457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7ms9w,Uid:cf76c0fb-5810-4336-87c9-fafbd84f2f4f,Namespace:kube-system,Attempt:0,} returns sandbox id \"94a9e8cf520b95845e2a40c20f060f22ba309b3f4f2d05e612702ea7401d517d\"" Nov 1 00:45:42.273430 kubelet[2110]: E1101 00:45:42.273394 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:45:42.276387 env[1313]: time="2025-11-01T00:45:42.276229046Z" level=info msg="CreateContainer within sandbox \"94a9e8cf520b95845e2a40c20f060f22ba309b3f4f2d05e612702ea7401d517d\" for container 
&ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 1 00:45:42.386685 env[1313]: time="2025-11-01T00:45:42.386492485Z" level=info msg="CreateContainer within sandbox \"94a9e8cf520b95845e2a40c20f060f22ba309b3f4f2d05e612702ea7401d517d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bba86afddc2ee764cca34782ca5f205af2eb8aec36dac2a1cd7cc8e99c25d94c\"" Nov 1 00:45:42.388736 env[1313]: time="2025-11-01T00:45:42.387585240Z" level=info msg="StartContainer for \"bba86afddc2ee764cca34782ca5f205af2eb8aec36dac2a1cd7cc8e99c25d94c\"" Nov 1 00:45:42.548470 env[1313]: time="2025-11-01T00:45:42.548401651Z" level=info msg="StartContainer for \"bba86afddc2ee764cca34782ca5f205af2eb8aec36dac2a1cd7cc8e99c25d94c\" returns successfully" Nov 1 00:45:42.693894 kernel: kauditd_printk_skb: 4 callbacks suppressed Nov 1 00:45:42.694187 kernel: audit: type=1325 audit(1761957942.686:211): table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2314 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:45:42.686000 audit[2314]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2314 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:45:42.686000 audit[2314]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcf785cc90 a2=0 a3=7ffcf785cc7c items=0 ppid=2264 pid=2314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:42.686000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Nov 1 00:45:42.707950 kernel: audit: type=1300 audit(1761957942.686:211): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcf785cc90 a2=0 a3=7ffcf785cc7c items=0 ppid=2264 pid=2314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:42.708110 kernel: audit: type=1327 audit(1761957942.686:211): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Nov 1 00:45:42.689000 audit[2316]: NETFILTER_CFG table=nat:39 family=2 entries=1 op=nft_register_chain pid=2316 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:45:42.712243 kernel: audit: type=1325 audit(1761957942.689:212): table=nat:39 family=2 entries=1 op=nft_register_chain pid=2316 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:45:42.689000 audit[2316]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe3d492060 a2=0 a3=7ffe3d49204c items=0 ppid=2264 pid=2316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:42.689000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Nov 1 00:45:42.725251 kernel: audit: type=1300 audit(1761957942.689:212): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe3d492060 a2=0 a3=7ffe3d49204c items=0 ppid=2264 pid=2316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:42.725384 kernel: audit: type=1327 audit(1761957942.689:212): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Nov 1 00:45:42.725428 kernel: audit: type=1325 audit(1761957942.689:213): table=filter:40 family=2 entries=1 op=nft_register_chain pid=2317 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:45:42.689000 audit[2317]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_chain 
pid=2317 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:45:42.689000 audit[2317]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdfc270400 a2=0 a3=7ffdfc2703ec items=0 ppid=2264 pid=2317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:42.738887 kernel: audit: type=1300 audit(1761957942.689:213): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdfc270400 a2=0 a3=7ffdfc2703ec items=0 ppid=2264 pid=2317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:42.739006 kernel: audit: type=1327 audit(1761957942.689:213): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Nov 1 00:45:42.689000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Nov 1 00:45:42.693000 audit[2319]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_chain pid=2319 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:45:42.747661 kernel: audit: type=1325 audit(1761957942.693:214): table=filter:41 family=2 entries=1 op=nft_register_chain pid=2319 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:45:42.693000 audit[2319]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd1283ece0 a2=0 a3=7ffd1283eccc items=0 ppid=2264 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:42.693000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Nov 1 00:45:42.698000 audit[2315]: NETFILTER_CFG table=mangle:42 family=10 entries=1 op=nft_register_chain pid=2315 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:45:42.698000 audit[2315]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff4da67980 a2=0 a3=7fff4da6796c items=0 ppid=2264 pid=2315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:42.698000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Nov 1 00:45:42.698000 audit[2321]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_rule pid=2321 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:45:42.698000 audit[2321]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd54ec3850 a2=0 a3=7ffd54ec383c items=0 ppid=2264 pid=2321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:42.698000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Nov 1 00:45:42.702000 audit[2324]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_rule pid=2324 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:45:42.702000 audit[2324]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd51a823f0 a2=0 a3=7ffd51a823dc items=0 ppid=2264 pid=2324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:42.702000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Nov 1 00:45:42.702000 audit[2325]: NETFILTER_CFG table=nat:45 family=10 entries=1 op=nft_register_chain pid=2325 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:45:42.702000 audit[2325]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffa9626710 a2=0 a3=7fffa96266fc items=0 ppid=2264 pid=2325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:42.702000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Nov 1 00:45:42.702000 audit[2326]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_chain pid=2326 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:45:42.702000 audit[2326]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffa41c1d30 a2=0 a3=7fffa41c1d1c items=0 ppid=2264 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:42.702000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Nov 1 00:45:42.702000 audit[2327]: NETFILTER_CFG table=filter:47 family=10 entries=1 op=nft_register_chain pid=2327 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:45:42.702000 
audit[2327]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe46a333c0 a2=0 a3=7ffe46a333ac items=0 ppid=2264 pid=2327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:42.702000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Nov 1 00:45:42.707000 audit[2329]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2329 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:45:42.707000 audit[2329]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffeadd99d40 a2=0 a3=7ffeadd99d2c items=0 ppid=2264 pid=2329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:42.707000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Nov 1 00:45:42.707000 audit[2330]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2330 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:45:42.707000 audit[2330]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd5cefade0 a2=0 a3=7ffd5cefadcc items=0 ppid=2264 pid=2330 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:42.707000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Nov 1 00:45:42.711000 audit[2332]: 
NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2332 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:45:42.711000 audit[2332]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd05fbfef0 a2=0 a3=7ffd05fbfedc items=0 ppid=2264 pid=2332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:42.711000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Nov 1 00:45:42.711000 audit[2335]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2335 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:45:42.711000 audit[2335]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd1d5c3ab0 a2=0 a3=7ffd1d5c3a9c items=0 ppid=2264 pid=2335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:42.711000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Nov 1 00:45:42.716000 audit[2336]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2336 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:45:42.716000 audit[2336]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd0460c620 a2=0 a3=7ffd0460c60c items=0 ppid=2264 pid=2336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:42.716000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Nov 1 00:45:42.720000 audit[2338]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2338 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:45:42.720000 audit[2338]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcf0a51630 a2=0 a3=7ffcf0a5161c items=0 ppid=2264 pid=2338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:42.720000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Nov 1 00:45:42.720000 audit[2339]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2339 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:45:42.720000 audit[2339]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff498a9b70 a2=0 a3=7fff498a9b5c items=0 ppid=2264 pid=2339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:42.720000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Nov 1 00:45:42.720000 audit[2341]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2341 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:45:42.720000 audit[2341]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 
a0=3 a1=7ffc9bd358f0 a2=0 a3=7ffc9bd358dc items=0 ppid=2264 pid=2341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:42.720000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Nov 1 00:45:42.724000 audit[2344]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2344 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:45:42.724000 audit[2344]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffceed0b030 a2=0 a3=7ffceed0b01c items=0 ppid=2264 pid=2344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:42.724000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Nov 1 00:45:42.742000 audit[2347]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2347 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:45:42.742000 audit[2347]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe46100c80 a2=0 a3=7ffe46100c6c items=0 ppid=2264 pid=2347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:42.742000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Nov 1 00:45:42.747000 audit[2348]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2348 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:45:42.747000 audit[2348]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffedb5303f0 a2=0 a3=7ffedb5303dc items=0 ppid=2264 pid=2348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:42.747000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Nov 1 00:45:42.750000 audit[2350]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2350 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:45:42.750000 audit[2350]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffd4726e150 a2=0 a3=7ffd4726e13c items=0 ppid=2264 pid=2350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:42.750000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Nov 1 00:45:42.754000 audit[2353]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2353 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:45:42.754000 audit[2353]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe9f4157e0 a2=0 a3=7ffe9f4157cc items=0 
ppid=2264 pid=2353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:42.754000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Nov 1 00:45:42.755000 audit[2354]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2354 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:45:42.755000 audit[2354]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd269d2c70 a2=0 a3=7ffd269d2c5c items=0 ppid=2264 pid=2354 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:42.755000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Nov 1 00:45:42.760000 audit[2356]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2356 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:45:42.760000 audit[2356]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7fff2ece22a0 a2=0 a3=7fff2ece228c items=0 ppid=2264 pid=2356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:42.760000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Nov 1 00:45:42.784000 audit[2362]: NETFILTER_CFG 
table=filter:63 family=2 entries=8 op=nft_register_rule pid=2362 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:45:42.784000 audit[2362]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd35a0dd00 a2=0 a3=7ffd35a0dcec items=0 ppid=2264 pid=2362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:42.784000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:45:42.793000 audit[2362]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2362 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:45:42.793000 audit[2362]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffd35a0dd00 a2=0 a3=7ffd35a0dcec items=0 ppid=2264 pid=2362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:42.793000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:45:42.795000 audit[2367]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2367 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:45:42.795000 audit[2367]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fff053c0920 a2=0 a3=7fff053c090c items=0 ppid=2264 pid=2367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:42.795000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Nov 1 00:45:42.797000 audit[2371]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2371 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:45:42.797000 audit[2371]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffdb32558b0 a2=0 a3=7ffdb325589c items=0 ppid=2264 pid=2371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:42.797000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Nov 1 00:45:42.801000 audit[2377]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2377 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:45:42.801000 audit[2377]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffd8fffd020 a2=0 a3=7ffd8fffd00c items=0 ppid=2264 pid=2377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:42.801000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Nov 1 00:45:42.802000 audit[2379]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2379 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:45:42.802000 audit[2379]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe37c9bd80 a2=0 a3=7ffe37c9bd6c items=0 ppid=2264 pid=2379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:42.802000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Nov 1 00:45:42.804000 audit[2383]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2383 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:45:42.804000 audit[2383]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcadb89850 a2=0 a3=7ffcadb8983c items=0 ppid=2264 pid=2383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:42.804000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Nov 1 00:45:42.805000 audit[2385]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2385 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:45:42.805000 audit[2385]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe8f6b5290 a2=0 a3=7ffe8f6b527c items=0 ppid=2264 pid=2385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:42.805000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Nov 1 00:45:42.807000 audit[2389]: NETFILTER_CFG 
table=filter:71 family=10 entries=1 op=nft_register_rule pid=2389 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:45:42.807000 audit[2389]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe08896230 a2=0 a3=7ffe0889621c items=0 ppid=2264 pid=2389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:42.807000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Nov 1 00:45:42.811000 audit[2395]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2395 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:45:42.811000 audit[2395]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7fff60c7ff90 a2=0 a3=7fff60c7ff7c items=0 ppid=2264 pid=2395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:42.811000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Nov 1 00:45:42.812000 audit[2397]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2397 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:45:42.812000 audit[2397]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc6d3032b0 a2=0 a3=7ffc6d30329c items=0 ppid=2264 pid=2397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:42.812000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Nov 1 00:45:42.814000 audit[2401]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2401 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:45:42.814000 audit[2401]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd9aa74660 a2=0 a3=7ffd9aa7464c items=0 ppid=2264 pid=2401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:42.814000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Nov 1 00:45:42.815000 audit[2403]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2403 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:45:42.815000 audit[2403]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe887b7980 a2=0 a3=7ffe887b796c items=0 ppid=2264 pid=2403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:42.815000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Nov 1 00:45:42.817000 audit[2407]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2407 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:45:42.817000 audit[2407]: SYSCALL arch=c000003e syscall=46 
success=yes exit=748 a0=3 a1=7fff59716a20 a2=0 a3=7fff59716a0c items=0 ppid=2264 pid=2407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:42.817000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Nov 1 00:45:42.821000 audit[2413]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2413 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:45:42.821000 audit[2413]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd2a186df0 a2=0 a3=7ffd2a186ddc items=0 ppid=2264 pid=2413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:42.821000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Nov 1 00:45:42.824000 audit[2420]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2420 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:45:42.824000 audit[2420]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd790c47e0 a2=0 a3=7ffd790c47cc items=0 ppid=2264 pid=2420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:42.824000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Nov 1 00:45:42.826000 audit[2422]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2422 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:45:42.826000 audit[2422]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc5e4e0fa0 a2=0 a3=7ffc5e4e0f8c items=0 ppid=2264 pid=2422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:42.826000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Nov 1 00:45:42.828000 audit[2425]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2425 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:45:42.828000 audit[2425]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffe75033790 a2=0 a3=7ffe7503377c items=0 ppid=2264 pid=2425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:42.828000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Nov 1 00:45:42.829000 audit[2426]: NETFILTER_CFG table=filter:81 family=2 entries=14 op=nft_register_rule pid=2426 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:45:42.829000 audit[2426]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffb537db00 a2=0 
a3=7fffb537daec items=0 ppid=2264 pid=2426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:42.829000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:45:42.836000 audit[2426]: NETFILTER_CFG table=nat:82 family=2 entries=20 op=nft_register_rule pid=2426 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:45:42.836000 audit[2426]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7fffb537db00 a2=0 a3=7fffb537daec items=0 ppid=2264 pid=2426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:42.836000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:45:42.841000 audit[2429]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2429 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:45:42.841000 audit[2429]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffe59f329f0 a2=0 a3=7ffe59f329dc items=0 ppid=2264 pid=2429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:42.841000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Nov 1 00:45:42.842000 audit[2430]: NETFILTER_CFG table=nat:84 family=10 entries=1 op=nft_register_chain pid=2430 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:45:42.842000 audit[2430]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffee22320c0 a2=0 a3=7ffee22320ac items=0 ppid=2264 pid=2430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:42.842000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Nov 1 00:45:42.844000 audit[2432]: NETFILTER_CFG table=nat:85 family=10 entries=2 op=nft_register_chain pid=2432 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:45:42.844000 audit[2432]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fffe14b1c60 a2=0 a3=7fffe14b1c4c items=0 ppid=2264 pid=2432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:42.844000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Nov 1 00:45:42.845000 audit[2433]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=2433 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:45:42.845000 audit[2433]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe9225c630 a2=0 a3=7ffe9225c61c items=0 ppid=2264 pid=2433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:42.845000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Nov 1 00:45:42.847000 audit[2435]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=2435 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:45:42.847000 audit[2435]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffcbe3db020 a2=0 a3=7ffcbe3db00c items=0 ppid=2264 pid=2435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:42.847000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Nov 1 00:45:42.850000 audit[2438]: NETFILTER_CFG table=filter:88 family=10 entries=1 op=nft_register_rule pid=2438 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:45:42.850000 audit[2438]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffcf858e840 a2=0 a3=7ffcf858e82c items=0 ppid=2264 pid=2438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:42.850000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Nov 1 00:45:42.852000 audit[2440]: NETFILTER_CFG table=filter:89 family=10 entries=3 op=nft_register_rule pid=2440 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Nov 1 00:45:42.852000 audit[2440]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffd66d347a0 a2=0 a3=7ffd66d3478c items=0 ppid=2264 pid=2440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:42.852000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:45:42.853000 audit[2440]: NETFILTER_CFG table=nat:90 family=10 entries=7 op=nft_register_chain pid=2440 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Nov 1 00:45:42.853000 audit[2440]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffd66d347a0 a2=0 a3=7ffd66d3478c items=0 ppid=2264 pid=2440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:42.853000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:45:42.885000 audit[2470]: NETFILTER_CFG table=filter:91 family=10 entries=6 op=nft_register_rule pid=2470 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Nov 1 00:45:42.885000 audit[2470]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7fffb0e64cc0 a2=0 a3=7fffb0e64cac items=0 ppid=2264 pid=2470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:42.885000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:45:42.890000 audit[2470]: NETFILTER_CFG table=nat:92 family=10 entries=10 op=nft_register_rule pid=2470 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Nov 1 00:45:42.890000 audit[2470]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7fffb0e64cc0 a2=0 a3=7fffb0e64cac items=0 ppid=2264 pid=2470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:42.890000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:45:43.292090 kubelet[2110]: E1101 00:45:43.292055 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:45:43.300105 kubelet[2110]: I1101 00:45:43.300025 2110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7ms9w" podStartSLOduration=2.300008452 podStartE2EDuration="2.300008452s" podCreationTimestamp="2025-11-01 00:45:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:45:43.299948768 +0000 UTC m=+7.159406558" watchObservedRunningTime="2025-11-01 00:45:43.300008452 +0000 UTC m=+7.159466242" Nov 1 00:45:44.308378 kubelet[2110]: E1101 00:45:44.306598 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:45:44.455909 kubelet[2110]: E1101 00:45:44.453847 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:45:44.630828 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2378606460.mount: Deactivated successfully. 
Nov 1 00:45:45.308861 kubelet[2110]: E1101 00:45:45.307932 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:45:45.423881 env[1313]: time="2025-11-01T00:45:45.423811724Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.38.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:45:45.425671 env[1313]: time="2025-11-01T00:45:45.425636485Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:45:45.427389 env[1313]: time="2025-11-01T00:45:45.427312373Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.38.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:45:45.428817 env[1313]: time="2025-11-01T00:45:45.428776309Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:45:45.429319 env[1313]: time="2025-11-01T00:45:45.429285287Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 1 00:45:45.430989 env[1313]: time="2025-11-01T00:45:45.430958882Z" level=info msg="CreateContainer within sandbox \"63c168069900de1b1ca6e59c6e992b6a7bcf867af037b76546e9b802b3df3566\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 1 00:45:45.442881 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1119233490.mount: Deactivated successfully. 
Nov 1 00:45:45.444775 env[1313]: time="2025-11-01T00:45:45.444720496Z" level=info msg="CreateContainer within sandbox \"63c168069900de1b1ca6e59c6e992b6a7bcf867af037b76546e9b802b3df3566\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d5877eae0005d2fc1ea44646f10e0923686d0b74e0a6e68f61080643e2035c65\"" Nov 1 00:45:45.445298 env[1313]: time="2025-11-01T00:45:45.445241427Z" level=info msg="StartContainer for \"d5877eae0005d2fc1ea44646f10e0923686d0b74e0a6e68f61080643e2035c65\"" Nov 1 00:45:45.489368 env[1313]: time="2025-11-01T00:45:45.486844975Z" level=info msg="StartContainer for \"d5877eae0005d2fc1ea44646f10e0923686d0b74e0a6e68f61080643e2035c65\" returns successfully" Nov 1 00:45:47.923617 update_engine[1295]: I1101 00:45:47.923548 1295 update_attempter.cc:509] Updating boot flags... Nov 1 00:45:48.642369 kubelet[2110]: E1101 00:45:48.639679 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:45:48.693870 kubelet[2110]: I1101 00:45:48.693817 2110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-76vkl" podStartSLOduration=4.49606358 podStartE2EDuration="7.693801624s" podCreationTimestamp="2025-11-01 00:45:41 +0000 UTC" firstStartedPulling="2025-11-01 00:45:42.232245452 +0000 UTC m=+6.091703232" lastFinishedPulling="2025-11-01 00:45:45.429983485 +0000 UTC m=+9.289441276" observedRunningTime="2025-11-01 00:45:46.609767404 +0000 UTC m=+10.469225224" watchObservedRunningTime="2025-11-01 00:45:48.693801624 +0000 UTC m=+12.553259414" Nov 1 00:45:49.316546 kubelet[2110]: E1101 00:45:49.316493 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:45:50.456191 kubelet[2110]: E1101 00:45:50.456150 2110 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:45:51.319578 kubelet[2110]: E1101 00:45:51.319519 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:45:52.441948 sudo[1468]: pam_unix(sudo:session): session closed for user root Nov 1 00:45:52.440000 audit[1468]: USER_END pid=1468 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 00:45:52.444133 kernel: kauditd_printk_skb: 155 callbacks suppressed Nov 1 00:45:52.444209 kernel: audit: type=1106 audit(1761957952.440:266): pid=1468 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 00:45:52.440000 audit[1468]: CRED_DISP pid=1468 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 00:45:52.458363 kernel: audit: type=1104 audit(1761957952.440:267): pid=1468 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Nov 1 00:45:52.463029 sshd[1464]: pam_unix(sshd:session): session closed for user core Nov 1 00:45:52.462000 audit[1464]: USER_END pid=1464 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:45:52.466058 systemd[1]: sshd@6-10.0.0.128:22-10.0.0.1:54826.service: Deactivated successfully. Nov 1 00:45:52.467919 systemd[1]: session-7.scope: Deactivated successfully. Nov 1 00:45:52.468581 systemd-logind[1290]: Session 7 logged out. Waiting for processes to exit. Nov 1 00:45:52.470031 systemd-logind[1290]: Removed session 7. Nov 1 00:45:52.462000 audit[1464]: CRED_DISP pid=1464 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:45:52.478406 kernel: audit: type=1106 audit(1761957952.462:268): pid=1464 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:45:52.478489 kernel: audit: type=1104 audit(1761957952.462:269): pid=1464 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:45:52.478518 kernel: audit: type=1131 audit(1761957952.465:270): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.128:22-10.0.0.1:54826 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:45:52.465000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.128:22-10.0.0.1:54826 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:45:52.769000 audit[2575]: NETFILTER_CFG table=filter:93 family=2 entries=15 op=nft_register_rule pid=2575 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:45:52.769000 audit[2575]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffeab8ece60 a2=0 a3=7ffeab8ece4c items=0 ppid=2264 pid=2575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:52.785456 kernel: audit: type=1325 audit(1761957952.769:271): table=filter:93 family=2 entries=15 op=nft_register_rule pid=2575 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:45:52.785521 kernel: audit: type=1300 audit(1761957952.769:271): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffeab8ece60 a2=0 a3=7ffeab8ece4c items=0 ppid=2264 pid=2575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:52.769000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:45:52.789000 audit[2575]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2575 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:45:52.796118 kernel: audit: type=1327 audit(1761957952.769:271): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:45:52.796224 kernel: audit: type=1325 audit(1761957952.789:272): table=nat:94 
family=2 entries=12 op=nft_register_rule pid=2575 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:45:52.796377 kernel: audit: type=1300 audit(1761957952.789:272): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffeab8ece60 a2=0 a3=0 items=0 ppid=2264 pid=2575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:52.789000 audit[2575]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffeab8ece60 a2=0 a3=0 items=0 ppid=2264 pid=2575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:52.789000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:45:52.816000 audit[2577]: NETFILTER_CFG table=filter:95 family=2 entries=16 op=nft_register_rule pid=2577 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:45:52.816000 audit[2577]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe4a4a2bf0 a2=0 a3=7ffe4a4a2bdc items=0 ppid=2264 pid=2577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:52.816000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:45:52.821000 audit[2577]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=2577 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:45:52.821000 audit[2577]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe4a4a2bf0 a2=0 a3=0 items=0 ppid=2264 pid=2577 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:52.821000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:45:54.594000 audit[2579]: NETFILTER_CFG table=filter:97 family=2 entries=17 op=nft_register_rule pid=2579 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:45:54.594000 audit[2579]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffd1527db40 a2=0 a3=7ffd1527db2c items=0 ppid=2264 pid=2579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:54.594000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:45:54.599000 audit[2579]: NETFILTER_CFG table=nat:98 family=2 entries=12 op=nft_register_rule pid=2579 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:45:54.599000 audit[2579]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd1527db40 a2=0 a3=0 items=0 ppid=2264 pid=2579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:54.599000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:45:54.786000 audit[2581]: NETFILTER_CFG table=filter:99 family=2 entries=19 op=nft_register_rule pid=2581 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:45:54.786000 audit[2581]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcc2c86cb0 
a2=0 a3=7ffcc2c86c9c items=0 ppid=2264 pid=2581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:54.786000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:45:54.791000 audit[2581]: NETFILTER_CFG table=nat:100 family=2 entries=12 op=nft_register_rule pid=2581 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:45:54.791000 audit[2581]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcc2c86cb0 a2=0 a3=0 items=0 ppid=2264 pid=2581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:54.791000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:45:55.805000 audit[2583]: NETFILTER_CFG table=filter:101 family=2 entries=20 op=nft_register_rule pid=2583 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:45:55.805000 audit[2583]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffea49ccbc0 a2=0 a3=7ffea49ccbac items=0 ppid=2264 pid=2583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:55.805000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:45:55.812000 audit[2583]: NETFILTER_CFG table=nat:102 family=2 entries=12 op=nft_register_rule pid=2583 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:45:55.812000 audit[2583]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=2700 a0=3 a1=7ffea49ccbc0 a2=0 a3=0 items=0 ppid=2264 pid=2583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:55.812000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:45:57.144000 audit[2585]: NETFILTER_CFG table=filter:103 family=2 entries=21 op=nft_register_rule pid=2585 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:45:57.144000 audit[2585]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffef22b2ba0 a2=0 a3=7ffef22b2b8c items=0 ppid=2264 pid=2585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:57.144000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:45:57.149000 audit[2585]: NETFILTER_CFG table=nat:104 family=2 entries=12 op=nft_register_rule pid=2585 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:45:57.149000 audit[2585]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffef22b2ba0 a2=0 a3=0 items=0 ppid=2264 pid=2585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:57.149000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:45:57.297047 kubelet[2110]: I1101 00:45:57.296962 2110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/5cb59b08-1dff-4cf6-9ce8-6e1b1c41a226-tigera-ca-bundle\") pod \"calico-typha-564849f57c-6j2tw\" (UID: \"5cb59b08-1dff-4cf6-9ce8-6e1b1c41a226\") " pod="calico-system/calico-typha-564849f57c-6j2tw" Nov 1 00:45:57.297047 kubelet[2110]: I1101 00:45:57.297011 2110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/5cb59b08-1dff-4cf6-9ce8-6e1b1c41a226-typha-certs\") pod \"calico-typha-564849f57c-6j2tw\" (UID: \"5cb59b08-1dff-4cf6-9ce8-6e1b1c41a226\") " pod="calico-system/calico-typha-564849f57c-6j2tw" Nov 1 00:45:57.297047 kubelet[2110]: I1101 00:45:57.297031 2110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvtxr\" (UniqueName: \"kubernetes.io/projected/5cb59b08-1dff-4cf6-9ce8-6e1b1c41a226-kube-api-access-mvtxr\") pod \"calico-typha-564849f57c-6j2tw\" (UID: \"5cb59b08-1dff-4cf6-9ce8-6e1b1c41a226\") " pod="calico-system/calico-typha-564849f57c-6j2tw" Nov 1 00:45:57.473555 kubelet[2110]: E1101 00:45:57.473520 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:45:57.474006 env[1313]: time="2025-11-01T00:45:57.473967482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-564849f57c-6j2tw,Uid:5cb59b08-1dff-4cf6-9ce8-6e1b1c41a226,Namespace:calico-system,Attempt:0,}" Nov 1 00:45:57.700059 kubelet[2110]: I1101 00:45:57.700007 2110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b15ed6c-1025-4a29-9aa8-2d7b7f89bec8-xtables-lock\") pod \"calico-node-s7tp8\" (UID: \"9b15ed6c-1025-4a29-9aa8-2d7b7f89bec8\") " pod="calico-system/calico-node-s7tp8" Nov 1 00:45:57.700059 kubelet[2110]: I1101 00:45:57.700045 2110 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b15ed6c-1025-4a29-9aa8-2d7b7f89bec8-lib-modules\") pod \"calico-node-s7tp8\" (UID: \"9b15ed6c-1025-4a29-9aa8-2d7b7f89bec8\") " pod="calico-system/calico-node-s7tp8" Nov 1 00:45:57.700237 kubelet[2110]: I1101 00:45:57.700067 2110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbs8j\" (UniqueName: \"kubernetes.io/projected/9b15ed6c-1025-4a29-9aa8-2d7b7f89bec8-kube-api-access-tbs8j\") pod \"calico-node-s7tp8\" (UID: \"9b15ed6c-1025-4a29-9aa8-2d7b7f89bec8\") " pod="calico-system/calico-node-s7tp8" Nov 1 00:45:57.700237 kubelet[2110]: I1101 00:45:57.700103 2110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9b15ed6c-1025-4a29-9aa8-2d7b7f89bec8-flexvol-driver-host\") pod \"calico-node-s7tp8\" (UID: \"9b15ed6c-1025-4a29-9aa8-2d7b7f89bec8\") " pod="calico-system/calico-node-s7tp8" Nov 1 00:45:57.700237 kubelet[2110]: I1101 00:45:57.700134 2110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9b15ed6c-1025-4a29-9aa8-2d7b7f89bec8-var-lib-calico\") pod \"calico-node-s7tp8\" (UID: \"9b15ed6c-1025-4a29-9aa8-2d7b7f89bec8\") " pod="calico-system/calico-node-s7tp8" Nov 1 00:45:57.700237 kubelet[2110]: I1101 00:45:57.700156 2110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9b15ed6c-1025-4a29-9aa8-2d7b7f89bec8-cni-net-dir\") pod \"calico-node-s7tp8\" (UID: \"9b15ed6c-1025-4a29-9aa8-2d7b7f89bec8\") " pod="calico-system/calico-node-s7tp8" Nov 1 00:45:57.700237 kubelet[2110]: I1101 00:45:57.700175 2110 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b15ed6c-1025-4a29-9aa8-2d7b7f89bec8-tigera-ca-bundle\") pod \"calico-node-s7tp8\" (UID: \"9b15ed6c-1025-4a29-9aa8-2d7b7f89bec8\") " pod="calico-system/calico-node-s7tp8" Nov 1 00:45:57.700367 kubelet[2110]: I1101 00:45:57.700195 2110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9b15ed6c-1025-4a29-9aa8-2d7b7f89bec8-cni-bin-dir\") pod \"calico-node-s7tp8\" (UID: \"9b15ed6c-1025-4a29-9aa8-2d7b7f89bec8\") " pod="calico-system/calico-node-s7tp8" Nov 1 00:45:57.700367 kubelet[2110]: I1101 00:45:57.700210 2110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9b15ed6c-1025-4a29-9aa8-2d7b7f89bec8-policysync\") pod \"calico-node-s7tp8\" (UID: \"9b15ed6c-1025-4a29-9aa8-2d7b7f89bec8\") " pod="calico-system/calico-node-s7tp8" Nov 1 00:45:57.700367 kubelet[2110]: I1101 00:45:57.700230 2110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9b15ed6c-1025-4a29-9aa8-2d7b7f89bec8-var-run-calico\") pod \"calico-node-s7tp8\" (UID: \"9b15ed6c-1025-4a29-9aa8-2d7b7f89bec8\") " pod="calico-system/calico-node-s7tp8" Nov 1 00:45:57.700367 kubelet[2110]: I1101 00:45:57.700253 2110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9b15ed6c-1025-4a29-9aa8-2d7b7f89bec8-node-certs\") pod \"calico-node-s7tp8\" (UID: \"9b15ed6c-1025-4a29-9aa8-2d7b7f89bec8\") " pod="calico-system/calico-node-s7tp8" Nov 1 00:45:57.700367 kubelet[2110]: I1101 00:45:57.700283 2110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" 
(UniqueName: \"kubernetes.io/host-path/9b15ed6c-1025-4a29-9aa8-2d7b7f89bec8-cni-log-dir\") pod \"calico-node-s7tp8\" (UID: \"9b15ed6c-1025-4a29-9aa8-2d7b7f89bec8\") " pod="calico-system/calico-node-s7tp8" Nov 1 00:45:57.801850 kubelet[2110]: E1101 00:45:57.801757 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:45:57.801850 kubelet[2110]: W1101 00:45:57.801787 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:45:57.801850 kubelet[2110]: E1101 00:45:57.801836 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:45:57.807260 kubelet[2110]: E1101 00:45:57.803629 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:45:57.807260 kubelet[2110]: W1101 00:45:57.803666 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:45:57.807260 kubelet[2110]: E1101 00:45:57.803703 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:45:57.813180 kubelet[2110]: E1101 00:45:57.811295 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:45:57.813180 kubelet[2110]: W1101 00:45:57.811318 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:45:57.813180 kubelet[2110]: E1101 00:45:57.811374 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:45:57.833628 env[1313]: time="2025-11-01T00:45:57.833564965Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:45:57.833816 env[1313]: time="2025-11-01T00:45:57.833599620Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:45:57.833816 env[1313]: time="2025-11-01T00:45:57.833609097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:45:57.833816 env[1313]: time="2025-11-01T00:45:57.833755674Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6739b85dc90655dde1439cb8b4abc9e9fa4a843bc2f975ad9088c275427f02cc pid=2612 runtime=io.containerd.runc.v2 Nov 1 00:45:57.859561 kubelet[2110]: E1101 00:45:57.859532 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:45:57.860726 env[1313]: time="2025-11-01T00:45:57.860686088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-s7tp8,Uid:9b15ed6c-1025-4a29-9aa8-2d7b7f89bec8,Namespace:calico-system,Attempt:0,}" Nov 1 00:45:57.876846 env[1313]: time="2025-11-01T00:45:57.876800611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-564849f57c-6j2tw,Uid:5cb59b08-1dff-4cf6-9ce8-6e1b1c41a226,Namespace:calico-system,Attempt:0,} returns sandbox id \"6739b85dc90655dde1439cb8b4abc9e9fa4a843bc2f975ad9088c275427f02cc\"" Nov 1 00:45:57.877476 kubelet[2110]: E1101 00:45:57.877454 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:45:57.878469 env[1313]: time="2025-11-01T00:45:57.878138437Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 1 00:45:58.070167 env[1313]: time="2025-11-01T00:45:58.069981734Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:45:58.070167 env[1313]: time="2025-11-01T00:45:58.070024404Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:45:58.070167 env[1313]: time="2025-11-01T00:45:58.070035455Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:45:58.070372 env[1313]: time="2025-11-01T00:45:58.070219161Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/812043426828709770c7b297e7d027138d00f075bad0600dc6963c1cac5515bf pid=2654 runtime=io.containerd.runc.v2 Nov 1 00:45:58.086652 kubelet[2110]: E1101 00:45:58.086085 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zk5w7" podUID="323323dc-c361-4116-a022-8e5f45430869" Nov 1 00:45:58.102479 kubelet[2110]: E1101 00:45:58.102329 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:45:58.102479 kubelet[2110]: W1101 00:45:58.102366 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:45:58.102479 kubelet[2110]: E1101 00:45:58.102390 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:45:58.102929 kubelet[2110]: E1101 00:45:58.102819 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:45:58.102929 kubelet[2110]: W1101 00:45:58.102831 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:45:58.102929 kubelet[2110]: E1101 00:45:58.102843 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:45:58.103280 kubelet[2110]: E1101 00:45:58.103124 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:45:58.103280 kubelet[2110]: W1101 00:45:58.103136 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:45:58.103280 kubelet[2110]: E1101 00:45:58.103149 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:45:58.103743 kubelet[2110]: E1101 00:45:58.103628 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:45:58.103743 kubelet[2110]: W1101 00:45:58.103640 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:45:58.103743 kubelet[2110]: E1101 00:45:58.103652 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:45:58.104029 kubelet[2110]: E1101 00:45:58.103926 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:45:58.104029 kubelet[2110]: W1101 00:45:58.103938 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:45:58.104029 kubelet[2110]: E1101 00:45:58.103949 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:45:58.104329 kubelet[2110]: E1101 00:45:58.104220 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:45:58.104329 kubelet[2110]: W1101 00:45:58.104232 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:45:58.104329 kubelet[2110]: E1101 00:45:58.104247 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:45:58.104636 kubelet[2110]: E1101 00:45:58.104525 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:45:58.104636 kubelet[2110]: W1101 00:45:58.104537 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:45:58.104636 kubelet[2110]: E1101 00:45:58.104548 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:45:58.104913 kubelet[2110]: E1101 00:45:58.104801 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:45:58.104913 kubelet[2110]: W1101 00:45:58.104813 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:45:58.104913 kubelet[2110]: E1101 00:45:58.104824 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:45:58.105212 kubelet[2110]: E1101 00:45:58.105094 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:45:58.105212 kubelet[2110]: W1101 00:45:58.105118 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:45:58.105212 kubelet[2110]: E1101 00:45:58.105130 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:45:58.105519 kubelet[2110]: E1101 00:45:58.105412 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:45:58.105519 kubelet[2110]: W1101 00:45:58.105424 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:45:58.105519 kubelet[2110]: E1101 00:45:58.105437 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:45:58.105816 kubelet[2110]: E1101 00:45:58.105703 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:45:58.105816 kubelet[2110]: W1101 00:45:58.105715 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:45:58.105816 kubelet[2110]: E1101 00:45:58.105729 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:45:58.106134 kubelet[2110]: E1101 00:45:58.106005 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:45:58.106134 kubelet[2110]: W1101 00:45:58.106018 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:45:58.106134 kubelet[2110]: E1101 00:45:58.106030 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:45:58.106465 kubelet[2110]: E1101 00:45:58.106321 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:45:58.106465 kubelet[2110]: W1101 00:45:58.106334 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:45:58.106465 kubelet[2110]: E1101 00:45:58.106389 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:45:58.106655 kubelet[2110]: E1101 00:45:58.106641 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:45:58.106830 kubelet[2110]: W1101 00:45:58.106728 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:45:58.106830 kubelet[2110]: E1101 00:45:58.106748 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:45:58.107004 kubelet[2110]: E1101 00:45:58.106989 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:45:58.107097 kubelet[2110]: W1101 00:45:58.107078 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:45:58.107209 kubelet[2110]: E1101 00:45:58.107191 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:45:58.107481 kubelet[2110]: E1101 00:45:58.107468 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:45:58.107570 kubelet[2110]: W1101 00:45:58.107553 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:45:58.107647 kubelet[2110]: E1101 00:45:58.107632 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:45:58.107984 kubelet[2110]: E1101 00:45:58.107971 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:45:58.108070 kubelet[2110]: W1101 00:45:58.108053 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:45:58.108164 kubelet[2110]: E1101 00:45:58.108148 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:45:58.109491 kubelet[2110]: E1101 00:45:58.109479 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:45:58.109588 kubelet[2110]: W1101 00:45:58.109570 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:45:58.109680 kubelet[2110]: E1101 00:45:58.109663 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:45:58.109965 env[1313]: time="2025-11-01T00:45:58.109915676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-s7tp8,Uid:9b15ed6c-1025-4a29-9aa8-2d7b7f89bec8,Namespace:calico-system,Attempt:0,} returns sandbox id \"812043426828709770c7b297e7d027138d00f075bad0600dc6963c1cac5515bf\"" Nov 1 00:45:58.110206 kubelet[2110]: E1101 00:45:58.110086 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:45:58.110206 kubelet[2110]: W1101 00:45:58.110096 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:45:58.110206 kubelet[2110]: E1101 00:45:58.110118 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:45:58.110459 kubelet[2110]: E1101 00:45:58.110447 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:45:58.110540 kubelet[2110]: W1101 00:45:58.110524 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:45:58.110619 kubelet[2110]: E1101 00:45:58.110603 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:45:58.111368 kubelet[2110]: E1101 00:45:58.111330 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:45:58.183065 kernel: kauditd_printk_skb: 31 callbacks suppressed Nov 1 00:45:58.183275 kernel: audit: type=1325 audit(1761957958.179:283): table=filter:105 family=2 entries=22 op=nft_register_rule pid=2718 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:45:58.179000 audit[2718]: NETFILTER_CFG table=filter:105 family=2 entries=22 op=nft_register_rule pid=2718 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:45:58.179000 audit[2718]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffde00d84f0 a2=0 a3=7ffde00d84dc items=0 ppid=2264 pid=2718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:58.196280 kernel: audit: type=1300 audit(1761957958.179:283): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffde00d84f0 a2=0 a3=7ffde00d84dc items=0 ppid=2264 pid=2718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:58.196426 kernel: audit: type=1327 audit(1761957958.179:283): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:45:58.179000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:45:58.202998 kubelet[2110]: E1101 00:45:58.202976 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:45:58.202998 kubelet[2110]: W1101 00:45:58.202994 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:45:58.203135 kubelet[2110]: E1101 00:45:58.203012 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:45:58.203135 kubelet[2110]: I1101 00:45:58.203041 2110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/323323dc-c361-4116-a022-8e5f45430869-registration-dir\") pod \"csi-node-driver-zk5w7\" (UID: \"323323dc-c361-4116-a022-8e5f45430869\") " pod="calico-system/csi-node-driver-zk5w7" Nov 1 00:45:58.203243 kubelet[2110]: E1101 00:45:58.203224 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:45:58.203243 kubelet[2110]: W1101 00:45:58.203235 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:45:58.203372 kubelet[2110]: E1101 00:45:58.203249 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:45:58.203372 kubelet[2110]: I1101 00:45:58.203264 2110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/323323dc-c361-4116-a022-8e5f45430869-varrun\") pod \"csi-node-driver-zk5w7\" (UID: \"323323dc-c361-4116-a022-8e5f45430869\") " pod="calico-system/csi-node-driver-zk5w7" Nov 1 00:45:58.203593 kubelet[2110]: E1101 00:45:58.203574 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:45:58.203593 kubelet[2110]: W1101 00:45:58.203585 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:45:58.203593 kubelet[2110]: E1101 00:45:58.203596 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:45:58.203716 kubelet[2110]: I1101 00:45:58.203609 2110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/323323dc-c361-4116-a022-8e5f45430869-kubelet-dir\") pod \"csi-node-driver-zk5w7\" (UID: \"323323dc-c361-4116-a022-8e5f45430869\") " pod="calico-system/csi-node-driver-zk5w7" Nov 1 00:45:58.203885 kubelet[2110]: E1101 00:45:58.203862 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:45:58.203943 kubelet[2110]: W1101 00:45:58.203887 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:45:58.203943 kubelet[2110]: E1101 00:45:58.203926 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:45:58.204167 kubelet[2110]: E1101 00:45:58.204153 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:45:58.204228 kubelet[2110]: W1101 00:45:58.204166 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:45:58.204228 kubelet[2110]: E1101 00:45:58.204190 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:45:58.204416 kubelet[2110]: E1101 00:45:58.204400 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:45:58.204416 kubelet[2110]: W1101 00:45:58.204412 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:45:58.204526 kubelet[2110]: E1101 00:45:58.204436 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:45:58.204647 kubelet[2110]: E1101 00:45:58.204625 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:45:58.204647 kubelet[2110]: W1101 00:45:58.204642 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:45:58.204747 kubelet[2110]: E1101 00:45:58.204667 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:45:58.204867 kubelet[2110]: E1101 00:45:58.204851 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:45:58.204867 kubelet[2110]: W1101 00:45:58.204862 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:45:58.204970 kubelet[2110]: E1101 00:45:58.204884 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:45:58.204970 kubelet[2110]: I1101 00:45:58.204915 2110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq9vl\" (UniqueName: \"kubernetes.io/projected/323323dc-c361-4116-a022-8e5f45430869-kube-api-access-lq9vl\") pod \"csi-node-driver-zk5w7\" (UID: \"323323dc-c361-4116-a022-8e5f45430869\") " pod="calico-system/csi-node-driver-zk5w7" Nov 1 00:45:58.203000 audit[2718]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=2718 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:45:58.205151 kubelet[2110]: E1101 00:45:58.205137 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:45:58.205151 kubelet[2110]: W1101 00:45:58.205149 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:45:58.205231 kubelet[2110]: E1101 00:45:58.205200 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:45:58.205269 kubelet[2110]: I1101 00:45:58.205260 2110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/323323dc-c361-4116-a022-8e5f45430869-socket-dir\") pod \"csi-node-driver-zk5w7\" (UID: \"323323dc-c361-4116-a022-8e5f45430869\") " pod="calico-system/csi-node-driver-zk5w7" Nov 1 00:45:58.205417 kubelet[2110]: E1101 00:45:58.205397 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:45:58.205417 kubelet[2110]: W1101 00:45:58.205413 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:45:58.205534 kubelet[2110]: E1101 00:45:58.205456 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:45:58.205628 kubelet[2110]: E1101 00:45:58.205610 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:45:58.205628 kubelet[2110]: W1101 00:45:58.205620 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:45:58.205732 kubelet[2110]: E1101 00:45:58.205632 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:45:58.205824 kubelet[2110]: E1101 00:45:58.205810 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:45:58.205824 kubelet[2110]: W1101 00:45:58.205819 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:45:58.205901 kubelet[2110]: E1101 00:45:58.205830 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:45:58.206018 kubelet[2110]: E1101 00:45:58.205993 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:45:58.206018 kubelet[2110]: W1101 00:45:58.206013 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:45:58.206018 kubelet[2110]: E1101 00:45:58.206020 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:45:58.206232 kubelet[2110]: E1101 00:45:58.206217 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:45:58.206232 kubelet[2110]: W1101 00:45:58.206230 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:45:58.206318 kubelet[2110]: E1101 00:45:58.206241 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:45:58.206465 kubelet[2110]: E1101 00:45:58.206443 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:45:58.206465 kubelet[2110]: W1101 00:45:58.206461 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:45:58.206550 kubelet[2110]: E1101 00:45:58.206476 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:45:58.203000 audit[2718]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffde00d84f0 a2=0 a3=0 items=0 ppid=2264 pid=2718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:58.218071 kernel: audit: type=1325 audit(1761957958.203:284): table=nat:106 family=2 entries=12 op=nft_register_rule pid=2718 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:45:58.218132 kernel: audit: type=1300 audit(1761957958.203:284): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffde00d84f0 a2=0 a3=0 items=0 ppid=2264 pid=2718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:58.203000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:45:58.222482 kernel: audit: type=1327 audit(1761957958.203:284): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:45:58.306165 kubelet[2110]: E1101 00:45:58.306132 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:45:58.306165 kubelet[2110]: W1101 00:45:58.306152 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:45:58.306165 kubelet[2110]: E1101 00:45:58.306175 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:45:58.306624 kubelet[2110]: E1101 00:45:58.306443 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:45:58.306624 kubelet[2110]: W1101 00:45:58.306460 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:45:58.306624 kubelet[2110]: E1101 00:45:58.306484 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:45:58.306710 kubelet[2110]: E1101 00:45:58.306682 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:45:58.306710 kubelet[2110]: W1101 00:45:58.306697 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:45:58.306773 kubelet[2110]: E1101 00:45:58.306713 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:45:58.306912 kubelet[2110]: E1101 00:45:58.306899 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:45:58.306912 kubelet[2110]: W1101 00:45:58.306909 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:45:58.306973 kubelet[2110]: E1101 00:45:58.306921 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:45:58.307092 kubelet[2110]: E1101 00:45:58.307077 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:45:58.307092 kubelet[2110]: W1101 00:45:58.307089 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:45:58.307179 kubelet[2110]: E1101 00:45:58.307113 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:45:58.307286 kubelet[2110]: E1101 00:45:58.307270 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:45:58.307286 kubelet[2110]: W1101 00:45:58.307281 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:45:58.307380 kubelet[2110]: E1101 00:45:58.307292 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:45:58.307556 kubelet[2110]: E1101 00:45:58.307432 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:45:58.307556 kubelet[2110]: W1101 00:45:58.307442 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:45:58.307556 kubelet[2110]: E1101 00:45:58.307449 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:45:58.307654 kubelet[2110]: E1101 00:45:58.307645 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:45:58.307682 kubelet[2110]: W1101 00:45:58.307654 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:45:58.307682 kubelet[2110]: E1101 00:45:58.307676 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:45:58.307877 kubelet[2110]: E1101 00:45:58.307862 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:45:58.307877 kubelet[2110]: W1101 00:45:58.307871 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:45:58.307948 kubelet[2110]: E1101 00:45:58.307901 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:45:58.307992 kubelet[2110]: E1101 00:45:58.307980 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:45:58.307992 kubelet[2110]: W1101 00:45:58.307988 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:45:58.308040 kubelet[2110]: E1101 00:45:58.308007 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:45:58.308124 kubelet[2110]: E1101 00:45:58.308113 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:45:58.308124 kubelet[2110]: W1101 00:45:58.308121 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:45:58.308170 kubelet[2110]: E1101 00:45:58.308132 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:45:58.308308 kubelet[2110]: E1101 00:45:58.308292 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:45:58.308308 kubelet[2110]: W1101 00:45:58.308303 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:45:58.308402 kubelet[2110]: E1101 00:45:58.308315 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:45:58.308542 kubelet[2110]: E1101 00:45:58.308513 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:45:58.308542 kubelet[2110]: W1101 00:45:58.308533 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:45:58.308698 kubelet[2110]: E1101 00:45:58.308559 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:45:58.308738 kubelet[2110]: E1101 00:45:58.308721 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:45:58.308738 kubelet[2110]: W1101 00:45:58.308730 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:45:58.308797 kubelet[2110]: E1101 00:45:58.308743 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:45:58.308925 kubelet[2110]: E1101 00:45:58.308915 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:45:58.308950 kubelet[2110]: W1101 00:45:58.308925 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:45:58.308950 kubelet[2110]: E1101 00:45:58.308937 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:45:58.309090 kubelet[2110]: E1101 00:45:58.309078 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:45:58.309090 kubelet[2110]: W1101 00:45:58.309088 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:45:58.309154 kubelet[2110]: E1101 00:45:58.309109 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:45:58.309270 kubelet[2110]: E1101 00:45:58.309259 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:45:58.309270 kubelet[2110]: W1101 00:45:58.309267 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:45:58.309315 kubelet[2110]: E1101 00:45:58.309278 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:45:58.309443 kubelet[2110]: E1101 00:45:58.309432 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:45:58.309443 kubelet[2110]: W1101 00:45:58.309441 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:45:58.309499 kubelet[2110]: E1101 00:45:58.309453 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:45:58.309592 kubelet[2110]: E1101 00:45:58.309583 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:45:58.309592 kubelet[2110]: W1101 00:45:58.309590 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:45:58.309680 kubelet[2110]: E1101 00:45:58.309601 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:45:58.309734 kubelet[2110]: E1101 00:45:58.309725 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:45:58.309761 kubelet[2110]: W1101 00:45:58.309733 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:45:58.309761 kubelet[2110]: E1101 00:45:58.309745 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:45:58.309947 kubelet[2110]: E1101 00:45:58.309933 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:45:58.309947 kubelet[2110]: W1101 00:45:58.309944 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:45:58.310016 kubelet[2110]: E1101 00:45:58.309956 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:45:58.310148 kubelet[2110]: E1101 00:45:58.310137 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:45:58.310148 kubelet[2110]: W1101 00:45:58.310145 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:45:58.310194 kubelet[2110]: E1101 00:45:58.310156 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:45:58.310317 kubelet[2110]: E1101 00:45:58.310304 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:45:58.310317 kubelet[2110]: W1101 00:45:58.310315 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:45:58.310375 kubelet[2110]: E1101 00:45:58.310326 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:45:58.310501 kubelet[2110]: E1101 00:45:58.310489 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:45:58.310501 kubelet[2110]: W1101 00:45:58.310498 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:45:58.310545 kubelet[2110]: E1101 00:45:58.310505 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:45:58.338610 kubelet[2110]: E1101 00:45:58.338514 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:45:58.338610 kubelet[2110]: W1101 00:45:58.338530 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:45:58.338610 kubelet[2110]: E1101 00:45:58.338540 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:45:58.360951 kubelet[2110]: E1101 00:45:58.360906 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:45:58.360951 kubelet[2110]: W1101 00:45:58.360937 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:45:58.361131 kubelet[2110]: E1101 00:45:58.360963 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:46:00.037667 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2514233507.mount: Deactivated successfully. Nov 1 00:46:00.269010 kubelet[2110]: E1101 00:46:00.268960 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zk5w7" podUID="323323dc-c361-4116-a022-8e5f45430869" Nov 1 00:46:01.785099 env[1313]: time="2025-11-01T00:46:01.785025418Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:46:01.904162 env[1313]: time="2025-11-01T00:46:01.904088260Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:46:02.070474 env[1313]: time="2025-11-01T00:46:02.070319591Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 
00:46:02.093221 env[1313]: time="2025-11-01T00:46:02.093064814Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:46:02.093506 env[1313]: time="2025-11-01T00:46:02.093404684Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 1 00:46:02.098577 env[1313]: time="2025-11-01T00:46:02.098533941Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 1 00:46:02.112188 env[1313]: time="2025-11-01T00:46:02.111683310Z" level=info msg="CreateContainer within sandbox \"6739b85dc90655dde1439cb8b4abc9e9fa4a843bc2f975ad9088c275427f02cc\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 1 00:46:02.269369 kubelet[2110]: E1101 00:46:02.269259 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zk5w7" podUID="323323dc-c361-4116-a022-8e5f45430869" Nov 1 00:46:02.300876 env[1313]: time="2025-11-01T00:46:02.300776702Z" level=info msg="CreateContainer within sandbox \"6739b85dc90655dde1439cb8b4abc9e9fa4a843bc2f975ad9088c275427f02cc\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"69e1ed97e20291dc333672ff321eb305358d6850e664f5b8096a6abc7d8be8c7\"" Nov 1 00:46:02.302489 env[1313]: time="2025-11-01T00:46:02.302423506Z" level=info msg="StartContainer for \"69e1ed97e20291dc333672ff321eb305358d6850e664f5b8096a6abc7d8be8c7\"" Nov 1 00:46:02.727850 env[1313]: time="2025-11-01T00:46:02.727764261Z" level=info msg="StartContainer for 
\"69e1ed97e20291dc333672ff321eb305358d6850e664f5b8096a6abc7d8be8c7\" returns successfully" Nov 1 00:46:03.345548 kubelet[2110]: E1101 00:46:03.345514 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:46:03.445931 kubelet[2110]: E1101 00:46:03.445876 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:46:03.445931 kubelet[2110]: W1101 00:46:03.445908 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:46:03.445931 kubelet[2110]: E1101 00:46:03.445935 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:46:03.446215 kubelet[2110]: E1101 00:46:03.446179 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:46:03.446215 kubelet[2110]: W1101 00:46:03.446189 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:46:03.446215 kubelet[2110]: E1101 00:46:03.446199 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:46:03.446377 kubelet[2110]: E1101 00:46:03.446361 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:46:03.446377 kubelet[2110]: W1101 00:46:03.446373 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:46:03.446439 kubelet[2110]: E1101 00:46:03.446386 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:46:03.446631 kubelet[2110]: E1101 00:46:03.446615 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:46:03.446631 kubelet[2110]: W1101 00:46:03.446626 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:46:03.446729 kubelet[2110]: E1101 00:46:03.446636 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:46:03.446809 kubelet[2110]: E1101 00:46:03.446794 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:46:03.446809 kubelet[2110]: W1101 00:46:03.446806 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:46:03.446869 kubelet[2110]: E1101 00:46:03.446814 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:46:03.446957 kubelet[2110]: E1101 00:46:03.446940 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:46:03.446957 kubelet[2110]: W1101 00:46:03.446951 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:46:03.447054 kubelet[2110]: E1101 00:46:03.446959 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:46:03.447123 kubelet[2110]: E1101 00:46:03.447109 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:46:03.447123 kubelet[2110]: W1101 00:46:03.447119 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:46:03.447181 kubelet[2110]: E1101 00:46:03.447128 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:46:03.447273 kubelet[2110]: E1101 00:46:03.447259 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:46:03.447273 kubelet[2110]: W1101 00:46:03.447270 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:46:03.447338 kubelet[2110]: E1101 00:46:03.447278 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:46:03.447450 kubelet[2110]: E1101 00:46:03.447436 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:46:03.447450 kubelet[2110]: W1101 00:46:03.447448 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:46:03.447520 kubelet[2110]: E1101 00:46:03.447459 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:46:03.447599 kubelet[2110]: E1101 00:46:03.447584 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:46:03.447599 kubelet[2110]: W1101 00:46:03.447595 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:46:03.447667 kubelet[2110]: E1101 00:46:03.447604 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:46:03.447742 kubelet[2110]: E1101 00:46:03.447728 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:46:03.447742 kubelet[2110]: W1101 00:46:03.447738 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:46:03.447805 kubelet[2110]: E1101 00:46:03.447747 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:46:03.447891 kubelet[2110]: E1101 00:46:03.447877 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:46:03.447891 kubelet[2110]: W1101 00:46:03.447888 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:46:03.447949 kubelet[2110]: E1101 00:46:03.447897 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:46:03.448068 kubelet[2110]: E1101 00:46:03.448052 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:46:03.448068 kubelet[2110]: W1101 00:46:03.448065 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:46:03.448138 kubelet[2110]: E1101 00:46:03.448076 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:46:03.448239 kubelet[2110]: E1101 00:46:03.448214 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:46:03.448239 kubelet[2110]: W1101 00:46:03.448227 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:46:03.448239 kubelet[2110]: E1101 00:46:03.448235 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:46:03.448503 kubelet[2110]: E1101 00:46:03.448392 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:46:03.448503 kubelet[2110]: W1101 00:46:03.448399 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:46:03.448503 kubelet[2110]: E1101 00:46:03.448407 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:46:03.546392 kubelet[2110]: E1101 00:46:03.546342 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:46:03.546392 kubelet[2110]: W1101 00:46:03.546382 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:46:03.546618 kubelet[2110]: E1101 00:46:03.546403 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:46:03.546618 kubelet[2110]: E1101 00:46:03.546589 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:46:03.546618 kubelet[2110]: W1101 00:46:03.546601 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:46:03.546715 kubelet[2110]: E1101 00:46:03.546627 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:46:03.546847 kubelet[2110]: E1101 00:46:03.546830 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:46:03.546847 kubelet[2110]: W1101 00:46:03.546845 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:46:03.546941 kubelet[2110]: E1101 00:46:03.546862 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:46:03.547100 kubelet[2110]: E1101 00:46:03.547079 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:46:03.547100 kubelet[2110]: W1101 00:46:03.547090 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:46:03.547171 kubelet[2110]: E1101 00:46:03.547104 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:46:03.547272 kubelet[2110]: E1101 00:46:03.547255 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:46:03.547272 kubelet[2110]: W1101 00:46:03.547269 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:46:03.547382 kubelet[2110]: E1101 00:46:03.547283 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:46:03.547472 kubelet[2110]: E1101 00:46:03.547459 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:46:03.547511 kubelet[2110]: W1101 00:46:03.547471 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:46:03.547511 kubelet[2110]: E1101 00:46:03.547486 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:46:03.547704 kubelet[2110]: E1101 00:46:03.547686 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:46:03.547704 kubelet[2110]: W1101 00:46:03.547703 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:46:03.547791 kubelet[2110]: E1101 00:46:03.547732 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:46:03.547900 kubelet[2110]: E1101 00:46:03.547887 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:46:03.547900 kubelet[2110]: W1101 00:46:03.547899 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:46:03.548014 kubelet[2110]: E1101 00:46:03.547923 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:46:03.548062 kubelet[2110]: E1101 00:46:03.548047 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:46:03.548062 kubelet[2110]: W1101 00:46:03.548060 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:46:03.548156 kubelet[2110]: E1101 00:46:03.548080 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:46:03.548261 kubelet[2110]: E1101 00:46:03.548247 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:46:03.548261 kubelet[2110]: W1101 00:46:03.548259 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:46:03.548381 kubelet[2110]: E1101 00:46:03.548273 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:46:03.548483 kubelet[2110]: E1101 00:46:03.548468 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:46:03.548483 kubelet[2110]: W1101 00:46:03.548479 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:46:03.548570 kubelet[2110]: E1101 00:46:03.548492 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:46:03.548670 kubelet[2110]: E1101 00:46:03.548655 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:46:03.548670 kubelet[2110]: W1101 00:46:03.548666 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:46:03.548759 kubelet[2110]: E1101 00:46:03.548680 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:46:03.548912 kubelet[2110]: E1101 00:46:03.548897 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:46:03.548912 kubelet[2110]: W1101 00:46:03.548908 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:46:03.548991 kubelet[2110]: E1101 00:46:03.548921 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:46:03.549111 kubelet[2110]: E1101 00:46:03.549096 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:46:03.549111 kubelet[2110]: W1101 00:46:03.549106 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:46:03.549206 kubelet[2110]: E1101 00:46:03.549119 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:46:03.549287 kubelet[2110]: E1101 00:46:03.549274 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:46:03.549287 kubelet[2110]: W1101 00:46:03.549284 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:46:03.549360 kubelet[2110]: E1101 00:46:03.549295 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:46:03.549484 kubelet[2110]: E1101 00:46:03.549469 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:46:03.549484 kubelet[2110]: W1101 00:46:03.549479 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:46:03.549571 kubelet[2110]: E1101 00:46:03.549493 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:46:03.549722 kubelet[2110]: E1101 00:46:03.549708 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:46:03.549722 kubelet[2110]: W1101 00:46:03.549718 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:46:03.549804 kubelet[2110]: E1101 00:46:03.549727 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:46:03.549876 kubelet[2110]: E1101 00:46:03.549863 2110 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:46:03.549876 kubelet[2110]: W1101 00:46:03.549873 2110 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:46:03.549942 kubelet[2110]: E1101 00:46:03.549883 2110 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:46:03.915312 env[1313]: time="2025-11-01T00:46:03.915248779Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:46:03.919577 env[1313]: time="2025-11-01T00:46:03.919524113Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:46:04.123138 env[1313]: time="2025-11-01T00:46:04.122965083Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:46:04.125692 env[1313]: time="2025-11-01T00:46:04.125640361Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:46:04.126024 env[1313]: time="2025-11-01T00:46:04.125971635Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference 
\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 1 00:46:04.128136 env[1313]: time="2025-11-01T00:46:04.128090185Z" level=info msg="CreateContainer within sandbox \"812043426828709770c7b297e7d027138d00f075bad0600dc6963c1cac5515bf\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 1 00:46:04.143758 env[1313]: time="2025-11-01T00:46:04.143688704Z" level=info msg="CreateContainer within sandbox \"812043426828709770c7b297e7d027138d00f075bad0600dc6963c1cac5515bf\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"0cd2bf6078734c6b27d61dd6e648aff462cdf37e96de30983634af882e22e0f1\"" Nov 1 00:46:04.144363 env[1313]: time="2025-11-01T00:46:04.144284456Z" level=info msg="StartContainer for \"0cd2bf6078734c6b27d61dd6e648aff462cdf37e96de30983634af882e22e0f1\"" Nov 1 00:46:04.202058 env[1313]: time="2025-11-01T00:46:04.201929243Z" level=info msg="StartContainer for \"0cd2bf6078734c6b27d61dd6e648aff462cdf37e96de30983634af882e22e0f1\" returns successfully" Nov 1 00:46:04.229892 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0cd2bf6078734c6b27d61dd6e648aff462cdf37e96de30983634af882e22e0f1-rootfs.mount: Deactivated successfully. 
Nov 1 00:46:04.252784 env[1313]: time="2025-11-01T00:46:04.252728808Z" level=info msg="shim disconnected" id=0cd2bf6078734c6b27d61dd6e648aff462cdf37e96de30983634af882e22e0f1 Nov 1 00:46:04.252784 env[1313]: time="2025-11-01T00:46:04.252787367Z" level=warning msg="cleaning up after shim disconnected" id=0cd2bf6078734c6b27d61dd6e648aff462cdf37e96de30983634af882e22e0f1 namespace=k8s.io Nov 1 00:46:04.253009 env[1313]: time="2025-11-01T00:46:04.252800963Z" level=info msg="cleaning up dead shim" Nov 1 00:46:04.262162 env[1313]: time="2025-11-01T00:46:04.262097342Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:46:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2884 runtime=io.containerd.runc.v2\n" Nov 1 00:46:04.269044 kubelet[2110]: E1101 00:46:04.268974 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zk5w7" podUID="323323dc-c361-4116-a022-8e5f45430869" Nov 1 00:46:04.348455 kubelet[2110]: I1101 00:46:04.348420 2110 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 00:46:04.348883 kubelet[2110]: E1101 00:46:04.348713 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:46:04.349130 kubelet[2110]: E1101 00:46:04.348713 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:46:04.350909 env[1313]: time="2025-11-01T00:46:04.350867499Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 1 00:46:04.372682 kubelet[2110]: I1101 00:46:04.372622 2110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="calico-system/calico-typha-564849f57c-6j2tw" podStartSLOduration=3.152151836 podStartE2EDuration="7.372592686s" podCreationTimestamp="2025-11-01 00:45:57 +0000 UTC" firstStartedPulling="2025-11-01 00:45:57.877863538 +0000 UTC m=+21.737321328" lastFinishedPulling="2025-11-01 00:46:02.098304388 +0000 UTC m=+25.957762178" observedRunningTime="2025-11-01 00:46:03.563898528 +0000 UTC m=+27.423356318" watchObservedRunningTime="2025-11-01 00:46:04.372592686 +0000 UTC m=+28.232050476" Nov 1 00:46:06.256299 kubelet[2110]: I1101 00:46:06.256261 2110 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 00:46:06.256738 kubelet[2110]: E1101 00:46:06.256664 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:46:06.268832 kubelet[2110]: E1101 00:46:06.268771 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zk5w7" podUID="323323dc-c361-4116-a022-8e5f45430869" Nov 1 00:46:06.335000 audit[2908]: NETFILTER_CFG table=filter:107 family=2 entries=21 op=nft_register_rule pid=2908 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:46:06.335000 audit[2908]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd6cf51750 a2=0 a3=7ffd6cf5173c items=0 ppid=2264 pid=2908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:06.350227 kernel: audit: type=1325 audit(1761957966.335:285): table=filter:107 family=2 entries=21 op=nft_register_rule pid=2908 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:46:06.350405 
kernel: audit: type=1300 audit(1761957966.335:285): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd6cf51750 a2=0 a3=7ffd6cf5173c items=0 ppid=2264 pid=2908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:06.335000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:46:06.354074 kubelet[2110]: E1101 00:46:06.354038 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:46:06.359374 kernel: audit: type=1327 audit(1761957966.335:285): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:46:06.359428 kernel: audit: type=1325 audit(1761957966.352:286): table=nat:108 family=2 entries=19 op=nft_register_chain pid=2908 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:46:06.352000 audit[2908]: NETFILTER_CFG table=nat:108 family=2 entries=19 op=nft_register_chain pid=2908 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:46:06.369402 kernel: audit: type=1300 audit(1761957966.352:286): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffd6cf51750 a2=0 a3=7ffd6cf5173c items=0 ppid=2264 pid=2908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:06.352000 audit[2908]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffd6cf51750 a2=0 a3=7ffd6cf5173c items=0 ppid=2264 pid=2908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:06.352000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:46:06.373916 kernel: audit: type=1327 audit(1761957966.352:286): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:46:08.269801 kubelet[2110]: E1101 00:46:08.269723 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zk5w7" podUID="323323dc-c361-4116-a022-8e5f45430869" Nov 1 00:46:09.668650 env[1313]: time="2025-11-01T00:46:09.668578665Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:46:09.670880 env[1313]: time="2025-11-01T00:46:09.670774105Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:46:09.672658 env[1313]: time="2025-11-01T00:46:09.672594761Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:46:09.674675 env[1313]: time="2025-11-01T00:46:09.674596967Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:46:09.675315 env[1313]: time="2025-11-01T00:46:09.675276836Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 1 00:46:09.677952 env[1313]: time="2025-11-01T00:46:09.677433383Z" level=info msg="CreateContainer within sandbox \"812043426828709770c7b297e7d027138d00f075bad0600dc6963c1cac5515bf\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 1 00:46:09.694138 env[1313]: time="2025-11-01T00:46:09.694082322Z" level=info msg="CreateContainer within sandbox \"812043426828709770c7b297e7d027138d00f075bad0600dc6963c1cac5515bf\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"16c1fc0bb51e5645aa17e86bdaad73055c843d989521bed20d1d6796b2171b7a\"" Nov 1 00:46:09.694713 env[1313]: time="2025-11-01T00:46:09.694684204Z" level=info msg="StartContainer for \"16c1fc0bb51e5645aa17e86bdaad73055c843d989521bed20d1d6796b2171b7a\"" Nov 1 00:46:10.268531 kubelet[2110]: E1101 00:46:10.268469 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zk5w7" podUID="323323dc-c361-4116-a022-8e5f45430869" Nov 1 00:46:10.832239 env[1313]: time="2025-11-01T00:46:10.832139682Z" level=info msg="StartContainer for \"16c1fc0bb51e5645aa17e86bdaad73055c843d989521bed20d1d6796b2171b7a\" returns successfully" Nov 1 00:46:11.622140 env[1313]: time="2025-11-01T00:46:11.622037119Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 1 00:46:11.627687 kubelet[2110]: I1101 00:46:11.627653 2110 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 1 00:46:11.657243 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-16c1fc0bb51e5645aa17e86bdaad73055c843d989521bed20d1d6796b2171b7a-rootfs.mount: Deactivated successfully. Nov 1 00:46:11.666019 env[1313]: time="2025-11-01T00:46:11.665942753Z" level=info msg="shim disconnected" id=16c1fc0bb51e5645aa17e86bdaad73055c843d989521bed20d1d6796b2171b7a Nov 1 00:46:11.666019 env[1313]: time="2025-11-01T00:46:11.666008536Z" level=warning msg="cleaning up after shim disconnected" id=16c1fc0bb51e5645aa17e86bdaad73055c843d989521bed20d1d6796b2171b7a namespace=k8s.io Nov 1 00:46:11.666019 env[1313]: time="2025-11-01T00:46:11.666017393Z" level=info msg="cleaning up dead shim" Nov 1 00:46:11.681097 env[1313]: time="2025-11-01T00:46:11.679467215Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:46:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2958 runtime=io.containerd.runc.v2\ntime=\"2025-11-01T00:46:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" Nov 1 00:46:11.712671 kubelet[2110]: I1101 00:46:11.712626 2110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85145623-0c17-44c2-88eb-320f9ee94755-whisker-ca-bundle\") pod \"whisker-8977d49c-c9phz\" (UID: \"85145623-0c17-44c2-88eb-320f9ee94755\") " pod="calico-system/whisker-8977d49c-c9phz" Nov 1 00:46:11.712671 kubelet[2110]: I1101 00:46:11.712675 2110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/85145623-0c17-44c2-88eb-320f9ee94755-whisker-backend-key-pair\") pod \"whisker-8977d49c-c9phz\" (UID: \"85145623-0c17-44c2-88eb-320f9ee94755\") " pod="calico-system/whisker-8977d49c-c9phz" Nov 1 00:46:11.712671 kubelet[2110]: I1101 00:46:11.712702 2110 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-254jl\" (UniqueName: \"kubernetes.io/projected/6a5dd67d-d6ec-4bd2-9dac-f43fc3314f67-kube-api-access-254jl\") pod \"coredns-668d6bf9bc-jfbds\" (UID: \"6a5dd67d-d6ec-4bd2-9dac-f43fc3314f67\") " pod="kube-system/coredns-668d6bf9bc-jfbds" Nov 1 00:46:11.713088 kubelet[2110]: I1101 00:46:11.712723 2110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6fe081ef-ff27-4230-8865-b572345e2224-tigera-ca-bundle\") pod \"calico-kube-controllers-5dcb6947d5-ljzpr\" (UID: \"6fe081ef-ff27-4230-8865-b572345e2224\") " pod="calico-system/calico-kube-controllers-5dcb6947d5-ljzpr" Nov 1 00:46:11.713088 kubelet[2110]: I1101 00:46:11.712744 2110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mfw7\" (UniqueName: \"kubernetes.io/projected/6fe081ef-ff27-4230-8865-b572345e2224-kube-api-access-4mfw7\") pod \"calico-kube-controllers-5dcb6947d5-ljzpr\" (UID: \"6fe081ef-ff27-4230-8865-b572345e2224\") " pod="calico-system/calico-kube-controllers-5dcb6947d5-ljzpr" Nov 1 00:46:11.713088 kubelet[2110]: I1101 00:46:11.712766 2110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4n2n\" (UniqueName: \"kubernetes.io/projected/ffdf82e5-9850-41df-9576-1cf8a00ef8fd-kube-api-access-n4n2n\") pod \"calico-apiserver-7c858d548c-69qzg\" (UID: \"ffdf82e5-9850-41df-9576-1cf8a00ef8fd\") " pod="calico-apiserver/calico-apiserver-7c858d548c-69qzg" Nov 1 00:46:11.713088 kubelet[2110]: I1101 00:46:11.712786 2110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fbcbf90-e90d-4d2e-bb2c-68aa5206a338-goldmane-ca-bundle\") pod \"goldmane-666569f655-bbnwx\" (UID: 
\"5fbcbf90-e90d-4d2e-bb2c-68aa5206a338\") " pod="calico-system/goldmane-666569f655-bbnwx"
Nov 1 00:46:11.713088 kubelet[2110]: I1101 00:46:11.712842 2110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fbcbf90-e90d-4d2e-bb2c-68aa5206a338-config\") pod \"goldmane-666569f655-bbnwx\" (UID: \"5fbcbf90-e90d-4d2e-bb2c-68aa5206a338\") " pod="calico-system/goldmane-666569f655-bbnwx"
Nov 1 00:46:11.713542 kubelet[2110]: I1101 00:46:11.712869 2110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/5fbcbf90-e90d-4d2e-bb2c-68aa5206a338-goldmane-key-pair\") pod \"goldmane-666569f655-bbnwx\" (UID: \"5fbcbf90-e90d-4d2e-bb2c-68aa5206a338\") " pod="calico-system/goldmane-666569f655-bbnwx"
Nov 1 00:46:11.713542 kubelet[2110]: I1101 00:46:11.712927 2110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/704976ec-fdca-4788-bd96-1a541f0cf01c-calico-apiserver-certs\") pod \"calico-apiserver-7c858d548c-cmrkr\" (UID: \"704976ec-fdca-4788-bd96-1a541f0cf01c\") " pod="calico-apiserver/calico-apiserver-7c858d548c-cmrkr"
Nov 1 00:46:11.713542 kubelet[2110]: I1101 00:46:11.712994 2110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkh2w\" (UniqueName: \"kubernetes.io/projected/85145623-0c17-44c2-88eb-320f9ee94755-kube-api-access-kkh2w\") pod \"whisker-8977d49c-c9phz\" (UID: \"85145623-0c17-44c2-88eb-320f9ee94755\") " pod="calico-system/whisker-8977d49c-c9phz"
Nov 1 00:46:11.713542 kubelet[2110]: I1101 00:46:11.713020 2110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a5dd67d-d6ec-4bd2-9dac-f43fc3314f67-config-volume\") pod \"coredns-668d6bf9bc-jfbds\" (UID: \"6a5dd67d-d6ec-4bd2-9dac-f43fc3314f67\") " pod="kube-system/coredns-668d6bf9bc-jfbds"
Nov 1 00:46:11.713542 kubelet[2110]: I1101 00:46:11.713044 2110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46898\" (UniqueName: \"kubernetes.io/projected/704976ec-fdca-4788-bd96-1a541f0cf01c-kube-api-access-46898\") pod \"calico-apiserver-7c858d548c-cmrkr\" (UID: \"704976ec-fdca-4788-bd96-1a541f0cf01c\") " pod="calico-apiserver/calico-apiserver-7c858d548c-cmrkr"
Nov 1 00:46:11.713747 kubelet[2110]: I1101 00:46:11.713093 2110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6dbb4ea3-15f9-43b9-bbdc-e92a3f607f2c-config-volume\") pod \"coredns-668d6bf9bc-s979b\" (UID: \"6dbb4ea3-15f9-43b9-bbdc-e92a3f607f2c\") " pod="kube-system/coredns-668d6bf9bc-s979b"
Nov 1 00:46:11.713747 kubelet[2110]: I1101 00:46:11.713126 2110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxt8q\" (UniqueName: \"kubernetes.io/projected/6dbb4ea3-15f9-43b9-bbdc-e92a3f607f2c-kube-api-access-lxt8q\") pod \"coredns-668d6bf9bc-s979b\" (UID: \"6dbb4ea3-15f9-43b9-bbdc-e92a3f607f2c\") " pod="kube-system/coredns-668d6bf9bc-s979b"
Nov 1 00:46:11.713747 kubelet[2110]: I1101 00:46:11.713152 2110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ffdf82e5-9850-41df-9576-1cf8a00ef8fd-calico-apiserver-certs\") pod \"calico-apiserver-7c858d548c-69qzg\" (UID: \"ffdf82e5-9850-41df-9576-1cf8a00ef8fd\") " pod="calico-apiserver/calico-apiserver-7c858d548c-69qzg"
Nov 1 00:46:11.713747 kubelet[2110]: I1101 00:46:11.713174 2110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrxgv\" (UniqueName: \"kubernetes.io/projected/5fbcbf90-e90d-4d2e-bb2c-68aa5206a338-kube-api-access-lrxgv\") pod \"goldmane-666569f655-bbnwx\" (UID: \"5fbcbf90-e90d-4d2e-bb2c-68aa5206a338\") " pod="calico-system/goldmane-666569f655-bbnwx"
Nov 1 00:46:11.845787 kubelet[2110]: E1101 00:46:11.845724 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:46:11.846869 env[1313]: time="2025-11-01T00:46:11.846798524Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\""
Nov 1 00:46:11.970972 env[1313]: time="2025-11-01T00:46:11.970912746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8977d49c-c9phz,Uid:85145623-0c17-44c2-88eb-320f9ee94755,Namespace:calico-system,Attempt:0,}"
Nov 1 00:46:11.976326 kubelet[2110]: E1101 00:46:11.976267 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:46:11.976884 env[1313]: time="2025-11-01T00:46:11.976800077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s979b,Uid:6dbb4ea3-15f9-43b9-bbdc-e92a3f607f2c,Namespace:kube-system,Attempt:0,}"
Nov 1 00:46:11.979448 env[1313]: time="2025-11-01T00:46:11.979400878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c858d548c-69qzg,Uid:ffdf82e5-9850-41df-9576-1cf8a00ef8fd,Namespace:calico-apiserver,Attempt:0,}"
Nov 1 00:46:11.983843 kubelet[2110]: E1101 00:46:11.983783 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:46:11.984151 env[1313]: time="2025-11-01T00:46:11.984105715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-bbnwx,Uid:5fbcbf90-e90d-4d2e-bb2c-68aa5206a338,Namespace:calico-system,Attempt:0,}"
Nov 1 00:46:11.984360 env[1313]: time="2025-11-01T00:46:11.984232964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jfbds,Uid:6a5dd67d-d6ec-4bd2-9dac-f43fc3314f67,Namespace:kube-system,Attempt:0,}"
Nov 1 00:46:11.988022 env[1313]: time="2025-11-01T00:46:11.987992053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5dcb6947d5-ljzpr,Uid:6fe081ef-ff27-4230-8865-b572345e2224,Namespace:calico-system,Attempt:0,}"
Nov 1 00:46:11.989476 env[1313]: time="2025-11-01T00:46:11.989439765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c858d548c-cmrkr,Uid:704976ec-fdca-4788-bd96-1a541f0cf01c,Namespace:calico-apiserver,Attempt:0,}"
Nov 1 00:46:12.274285 env[1313]: time="2025-11-01T00:46:12.274152599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zk5w7,Uid:323323dc-c361-4116-a022-8e5f45430869,Namespace:calico-system,Attempt:0,}"
Nov 1 00:46:12.360587 env[1313]: time="2025-11-01T00:46:12.360469116Z" level=error msg="Failed to destroy network for sandbox \"c0f66d4bbe5eb61cab56e8ea514aa99208d4e0768b561701e1c0823b7e31fbc9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 00:46:12.361209 env[1313]: time="2025-11-01T00:46:12.361170826Z" level=error msg="encountered an error cleaning up failed sandbox \"c0f66d4bbe5eb61cab56e8ea514aa99208d4e0768b561701e1c0823b7e31fbc9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 00:46:12.361401 env[1313]: time="2025-11-01T00:46:12.361359770Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5dcb6947d5-ljzpr,Uid:6fe081ef-ff27-4230-8865-b572345e2224,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c0f66d4bbe5eb61cab56e8ea514aa99208d4e0768b561701e1c0823b7e31fbc9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 00:46:12.362426 kubelet[2110]: E1101 00:46:12.361785 2110 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0f66d4bbe5eb61cab56e8ea514aa99208d4e0768b561701e1c0823b7e31fbc9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 00:46:12.362426 kubelet[2110]: E1101 00:46:12.361922 2110 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0f66d4bbe5eb61cab56e8ea514aa99208d4e0768b561701e1c0823b7e31fbc9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5dcb6947d5-ljzpr"
Nov 1 00:46:12.362426 kubelet[2110]: E1101 00:46:12.361950 2110 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0f66d4bbe5eb61cab56e8ea514aa99208d4e0768b561701e1c0823b7e31fbc9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5dcb6947d5-ljzpr"
Nov 1 00:46:12.362597 kubelet[2110]: E1101 00:46:12.362025 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5dcb6947d5-ljzpr_calico-system(6fe081ef-ff27-4230-8865-b572345e2224)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5dcb6947d5-ljzpr_calico-system(6fe081ef-ff27-4230-8865-b572345e2224)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c0f66d4bbe5eb61cab56e8ea514aa99208d4e0768b561701e1c0823b7e31fbc9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5dcb6947d5-ljzpr" podUID="6fe081ef-ff27-4230-8865-b572345e2224"
Nov 1 00:46:12.374484 env[1313]: time="2025-11-01T00:46:12.374403274Z" level=error msg="Failed to destroy network for sandbox \"78af24ccbafbc3fe2805e32086a970ba748d84ceba4d9fcf426d832b1906a317\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 00:46:12.375062 env[1313]: time="2025-11-01T00:46:12.375028640Z" level=error msg="encountered an error cleaning up failed sandbox \"78af24ccbafbc3fe2805e32086a970ba748d84ceba4d9fcf426d832b1906a317\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 00:46:12.375248 env[1313]: time="2025-11-01T00:46:12.375216143Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s979b,Uid:6dbb4ea3-15f9-43b9-bbdc-e92a3f607f2c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"78af24ccbafbc3fe2805e32086a970ba748d84ceba4d9fcf426d832b1906a317\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 00:46:12.375746 kubelet[2110]: E1101 00:46:12.375695 2110 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78af24ccbafbc3fe2805e32086a970ba748d84ceba4d9fcf426d832b1906a317\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 00:46:12.375851 kubelet[2110]: E1101 00:46:12.375777 2110 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78af24ccbafbc3fe2805e32086a970ba748d84ceba4d9fcf426d832b1906a317\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-s979b"
Nov 1 00:46:12.375851 kubelet[2110]: E1101 00:46:12.375799 2110 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78af24ccbafbc3fe2805e32086a970ba748d84ceba4d9fcf426d832b1906a317\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-s979b"
Nov 1 00:46:12.375930 kubelet[2110]: E1101 00:46:12.375853 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-s979b_kube-system(6dbb4ea3-15f9-43b9-bbdc-e92a3f607f2c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-s979b_kube-system(6dbb4ea3-15f9-43b9-bbdc-e92a3f607f2c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"78af24ccbafbc3fe2805e32086a970ba748d84ceba4d9fcf426d832b1906a317\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-s979b" podUID="6dbb4ea3-15f9-43b9-bbdc-e92a3f607f2c"
Nov 1 00:46:12.376167 env[1313]: time="2025-11-01T00:46:12.376133728Z" level=error msg="Failed to destroy network for sandbox \"33e3c78eeed54dfa10cf1875bdbb6f84aa8cb8d75ff47a04680d3a14eb4417a8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 00:46:12.376672 env[1313]: time="2025-11-01T00:46:12.376639689Z" level=error msg="encountered an error cleaning up failed sandbox \"33e3c78eeed54dfa10cf1875bdbb6f84aa8cb8d75ff47a04680d3a14eb4417a8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 00:46:12.376804 env[1313]: time="2025-11-01T00:46:12.376764774Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c858d548c-69qzg,Uid:ffdf82e5-9850-41df-9576-1cf8a00ef8fd,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"33e3c78eeed54dfa10cf1875bdbb6f84aa8cb8d75ff47a04680d3a14eb4417a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 00:46:12.377251 kubelet[2110]: E1101 00:46:12.377059 2110 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33e3c78eeed54dfa10cf1875bdbb6f84aa8cb8d75ff47a04680d3a14eb4417a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 00:46:12.377251 kubelet[2110]: E1101 00:46:12.377119 2110 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33e3c78eeed54dfa10cf1875bdbb6f84aa8cb8d75ff47a04680d3a14eb4417a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c858d548c-69qzg"
Nov 1 00:46:12.377251 kubelet[2110]: E1101 00:46:12.377138 2110 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33e3c78eeed54dfa10cf1875bdbb6f84aa8cb8d75ff47a04680d3a14eb4417a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c858d548c-69qzg"
Nov 1 00:46:12.377500 kubelet[2110]: E1101 00:46:12.377196 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7c858d548c-69qzg_calico-apiserver(ffdf82e5-9850-41df-9576-1cf8a00ef8fd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7c858d548c-69qzg_calico-apiserver(ffdf82e5-9850-41df-9576-1cf8a00ef8fd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"33e3c78eeed54dfa10cf1875bdbb6f84aa8cb8d75ff47a04680d3a14eb4417a8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c858d548c-69qzg" podUID="ffdf82e5-9850-41df-9576-1cf8a00ef8fd"
Nov 1 00:46:12.398566 env[1313]: time="2025-11-01T00:46:12.398488798Z" level=error msg="Failed to destroy network for sandbox \"f06b1e1136a71a703e314baa39dd2dcabbfca8b11f5516d845f11ecceba01c4d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 00:46:12.398774 env[1313]: time="2025-11-01T00:46:12.398488989Z" level=error msg="Failed to destroy network for sandbox \"4a48ae9ec8973deb46659af44aa4a65628f3f9ec5347c31c080a0a8d52ee7cfa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 00:46:12.399169 env[1313]: time="2025-11-01T00:46:12.399130215Z" level=error msg="encountered an error cleaning up failed sandbox \"f06b1e1136a71a703e314baa39dd2dcabbfca8b11f5516d845f11ecceba01c4d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 00:46:12.399240 env[1313]: time="2025-11-01T00:46:12.399192091Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c858d548c-cmrkr,Uid:704976ec-fdca-4788-bd96-1a541f0cf01c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f06b1e1136a71a703e314baa39dd2dcabbfca8b11f5516d845f11ecceba01c4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 00:46:12.399519 env[1313]: time="2025-11-01T00:46:12.399482938Z" level=error msg="encountered an error cleaning up failed sandbox \"4a48ae9ec8973deb46659af44aa4a65628f3f9ec5347c31c080a0a8d52ee7cfa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 00:46:12.399603 kubelet[2110]: E1101 00:46:12.399466 2110 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f06b1e1136a71a703e314baa39dd2dcabbfca8b11f5516d845f11ecceba01c4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 00:46:12.399603 kubelet[2110]: E1101 00:46:12.399566 2110 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f06b1e1136a71a703e314baa39dd2dcabbfca8b11f5516d845f11ecceba01c4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c858d548c-cmrkr"
Nov 1 00:46:12.399712 kubelet[2110]: E1101 00:46:12.399605 2110 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f06b1e1136a71a703e314baa39dd2dcabbfca8b11f5516d845f11ecceba01c4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c858d548c-cmrkr"
Nov 1 00:46:12.399712 kubelet[2110]: E1101 00:46:12.399681 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7c858d548c-cmrkr_calico-apiserver(704976ec-fdca-4788-bd96-1a541f0cf01c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7c858d548c-cmrkr_calico-apiserver(704976ec-fdca-4788-bd96-1a541f0cf01c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f06b1e1136a71a703e314baa39dd2dcabbfca8b11f5516d845f11ecceba01c4d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c858d548c-cmrkr" podUID="704976ec-fdca-4788-bd96-1a541f0cf01c"
Nov 1 00:46:12.399950 env[1313]: time="2025-11-01T00:46:12.399899711Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jfbds,Uid:6a5dd67d-d6ec-4bd2-9dac-f43fc3314f67,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4a48ae9ec8973deb46659af44aa4a65628f3f9ec5347c31c080a0a8d52ee7cfa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 00:46:12.400197 env[1313]: time="2025-11-01T00:46:12.400169428Z" level=error msg="Failed to destroy network for sandbox \"9d6a4ff5375e9dee030a9560e67521d69c889f7455951caa8c67e0f2b2e1c2db\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 00:46:12.400586 env[1313]: time="2025-11-01T00:46:12.400556295Z" level=error msg="encountered an error cleaning up failed sandbox \"9d6a4ff5375e9dee030a9560e67521d69c889f7455951caa8c67e0f2b2e1c2db\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 00:46:12.400726 env[1313]: time="2025-11-01T00:46:12.400690017Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-bbnwx,Uid:5fbcbf90-e90d-4d2e-bb2c-68aa5206a338,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9d6a4ff5375e9dee030a9560e67521d69c889f7455951caa8c67e0f2b2e1c2db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 00:46:12.401499 kubelet[2110]: E1101 00:46:12.401469 2110 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d6a4ff5375e9dee030a9560e67521d69c889f7455951caa8c67e0f2b2e1c2db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 00:46:12.401570 kubelet[2110]: E1101 00:46:12.401508 2110 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d6a4ff5375e9dee030a9560e67521d69c889f7455951caa8c67e0f2b2e1c2db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-bbnwx"
Nov 1 00:46:12.401570 kubelet[2110]: E1101 00:46:12.401471 2110 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a48ae9ec8973deb46659af44aa4a65628f3f9ec5347c31c080a0a8d52ee7cfa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 00:46:12.401570 kubelet[2110]: E1101 00:46:12.401547 2110 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a48ae9ec8973deb46659af44aa4a65628f3f9ec5347c31c080a0a8d52ee7cfa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-jfbds"
Nov 1 00:46:12.401683 kubelet[2110]: E1101 00:46:12.401581 2110 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a48ae9ec8973deb46659af44aa4a65628f3f9ec5347c31c080a0a8d52ee7cfa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-jfbds"
Nov 1 00:46:12.401683 kubelet[2110]: E1101 00:46:12.401521 2110 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d6a4ff5375e9dee030a9560e67521d69c889f7455951caa8c67e0f2b2e1c2db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-bbnwx"
Nov 1 00:46:12.401683 kubelet[2110]: E1101 00:46:12.401617 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-bbnwx_calico-system(5fbcbf90-e90d-4d2e-bb2c-68aa5206a338)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-bbnwx_calico-system(5fbcbf90-e90d-4d2e-bb2c-68aa5206a338)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9d6a4ff5375e9dee030a9560e67521d69c889f7455951caa8c67e0f2b2e1c2db\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-bbnwx" podUID="5fbcbf90-e90d-4d2e-bb2c-68aa5206a338"
Nov 1 00:46:12.401837 kubelet[2110]: E1101 00:46:12.401618 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-jfbds_kube-system(6a5dd67d-d6ec-4bd2-9dac-f43fc3314f67)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-jfbds_kube-system(6a5dd67d-d6ec-4bd2-9dac-f43fc3314f67)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4a48ae9ec8973deb46659af44aa4a65628f3f9ec5347c31c080a0a8d52ee7cfa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-jfbds" podUID="6a5dd67d-d6ec-4bd2-9dac-f43fc3314f67"
Nov 1 00:46:12.404461 env[1313]: time="2025-11-01T00:46:12.404407787Z" level=error msg="Failed to destroy network for sandbox \"80ab6533acd3b9b39bd97e4ee3743e6fafca9e527146f8c0ef2a354bb593ae74\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 00:46:12.404858 env[1313]: time="2025-11-01T00:46:12.404800426Z" level=error msg="encountered an error cleaning up failed sandbox \"80ab6533acd3b9b39bd97e4ee3743e6fafca9e527146f8c0ef2a354bb593ae74\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 00:46:12.404919 env[1313]: time="2025-11-01T00:46:12.404876749Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8977d49c-c9phz,Uid:85145623-0c17-44c2-88eb-320f9ee94755,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"80ab6533acd3b9b39bd97e4ee3743e6fafca9e527146f8c0ef2a354bb593ae74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 00:46:12.405108 kubelet[2110]: E1101 00:46:12.405067 2110 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80ab6533acd3b9b39bd97e4ee3743e6fafca9e527146f8c0ef2a354bb593ae74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 00:46:12.405184 kubelet[2110]: E1101 00:46:12.405128 2110 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80ab6533acd3b9b39bd97e4ee3743e6fafca9e527146f8c0ef2a354bb593ae74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-8977d49c-c9phz"
Nov 1 00:46:12.405184 kubelet[2110]: E1101 00:46:12.405156 2110 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80ab6533acd3b9b39bd97e4ee3743e6fafca9e527146f8c0ef2a354bb593ae74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-8977d49c-c9phz"
Nov 1 00:46:12.405257 kubelet[2110]: E1101 00:46:12.405204 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-8977d49c-c9phz_calico-system(85145623-0c17-44c2-88eb-320f9ee94755)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-8977d49c-c9phz_calico-system(85145623-0c17-44c2-88eb-320f9ee94755)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"80ab6533acd3b9b39bd97e4ee3743e6fafca9e527146f8c0ef2a354bb593ae74\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-8977d49c-c9phz" podUID="85145623-0c17-44c2-88eb-320f9ee94755"
Nov 1 00:46:12.405958 env[1313]: time="2025-11-01T00:46:12.405906896Z" level=error msg="Failed to destroy network for sandbox \"7246ed21bc671fc80d547e9ff96b714b56723571def31f8f182068ee838b16ca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 00:46:12.406269 env[1313]: time="2025-11-01T00:46:12.406234933Z" level=error msg="encountered an error cleaning up failed sandbox \"7246ed21bc671fc80d547e9ff96b714b56723571def31f8f182068ee838b16ca\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 00:46:12.406433 env[1313]: time="2025-11-01T00:46:12.406395264Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zk5w7,Uid:323323dc-c361-4116-a022-8e5f45430869,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7246ed21bc671fc80d547e9ff96b714b56723571def31f8f182068ee838b16ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 00:46:12.406696 kubelet[2110]: E1101 00:46:12.406650 2110 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7246ed21bc671fc80d547e9ff96b714b56723571def31f8f182068ee838b16ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 00:46:12.406762 kubelet[2110]: E1101 00:46:12.406702 2110 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7246ed21bc671fc80d547e9ff96b714b56723571def31f8f182068ee838b16ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zk5w7"
Nov 1 00:46:12.406762 kubelet[2110]: E1101 00:46:12.406728 2110 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7246ed21bc671fc80d547e9ff96b714b56723571def31f8f182068ee838b16ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zk5w7"
Nov 1 00:46:12.406860 kubelet[2110]: E1101 00:46:12.406772 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zk5w7_calico-system(323323dc-c361-4116-a022-8e5f45430869)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zk5w7_calico-system(323323dc-c361-4116-a022-8e5f45430869)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7246ed21bc671fc80d547e9ff96b714b56723571def31f8f182068ee838b16ca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zk5w7" podUID="323323dc-c361-4116-a022-8e5f45430869"
Nov 1 00:46:12.847986 kubelet[2110]: I1101 00:46:12.847924 2110 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80ab6533acd3b9b39bd97e4ee3743e6fafca9e527146f8c0ef2a354bb593ae74"
Nov 1 00:46:12.848849 env[1313]: time="2025-11-01T00:46:12.848786492Z" level=info msg="StopPodSandbox for \"80ab6533acd3b9b39bd97e4ee3743e6fafca9e527146f8c0ef2a354bb593ae74\""
Nov 1 00:46:12.849329 kubelet[2110]: I1101 00:46:12.849165 2110 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0f66d4bbe5eb61cab56e8ea514aa99208d4e0768b561701e1c0823b7e31fbc9"
Nov 1 00:46:12.849622 env[1313]: time="2025-11-01T00:46:12.849574133Z" level=info msg="StopPodSandbox for \"c0f66d4bbe5eb61cab56e8ea514aa99208d4e0768b561701e1c0823b7e31fbc9\""
Nov 1 00:46:12.851220 kubelet[2110]: I1101 00:46:12.851186 2110 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f06b1e1136a71a703e314baa39dd2dcabbfca8b11f5516d845f11ecceba01c4d"
Nov 1 00:46:12.852390 env[1313]: time="2025-11-01T00:46:12.852317621Z" level=info msg="StopPodSandbox for \"f06b1e1136a71a703e314baa39dd2dcabbfca8b11f5516d845f11ecceba01c4d\""
Nov 1 00:46:12.853668 kubelet[2110]: I1101 00:46:12.853174 2110 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7246ed21bc671fc80d547e9ff96b714b56723571def31f8f182068ee838b16ca"
Nov 1 00:46:12.853919 env[1313]: time="2025-11-01T00:46:12.853876532Z" level=info msg="StopPodSandbox for \"7246ed21bc671fc80d547e9ff96b714b56723571def31f8f182068ee838b16ca\""
Nov 1 00:46:12.856507 kubelet[2110]: I1101 00:46:12.856105 2110 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d6a4ff5375e9dee030a9560e67521d69c889f7455951caa8c67e0f2b2e1c2db"
Nov 1 00:46:12.856892 env[1313]: time="2025-11-01T00:46:12.856858989Z" level=info msg="StopPodSandbox for \"9d6a4ff5375e9dee030a9560e67521d69c889f7455951caa8c67e0f2b2e1c2db\""
Nov 1 00:46:12.858074 kubelet[2110]: I1101 00:46:12.857835 2110 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a48ae9ec8973deb46659af44aa4a65628f3f9ec5347c31c080a0a8d52ee7cfa"
Nov 1 00:46:12.858210 env[1313]: time="2025-11-01T00:46:12.858173321Z" level=info msg="StopPodSandbox for \"4a48ae9ec8973deb46659af44aa4a65628f3f9ec5347c31c080a0a8d52ee7cfa\""
Nov 1 00:46:12.859283 kubelet[2110]: I1101 00:46:12.859260 2110 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33e3c78eeed54dfa10cf1875bdbb6f84aa8cb8d75ff47a04680d3a14eb4417a8"
Nov 1 00:46:12.859642 env[1313]: time="2025-11-01T00:46:12.859604241Z" level=info msg="StopPodSandbox for \"33e3c78eeed54dfa10cf1875bdbb6f84aa8cb8d75ff47a04680d3a14eb4417a8\""
Nov 1 00:46:12.860499 kubelet[2110]: I1101 00:46:12.860473 2110 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78af24ccbafbc3fe2805e32086a970ba748d84ceba4d9fcf426d832b1906a317"
Nov 1 00:46:12.860940 env[1313]: time="2025-11-01T00:46:12.860917039Z" level=info msg="StopPodSandbox for \"78af24ccbafbc3fe2805e32086a970ba748d84ceba4d9fcf426d832b1906a317\""
Nov 1 00:46:12.898688 env[1313]: time="2025-11-01T00:46:12.898625326Z" level=error msg="StopPodSandbox for \"80ab6533acd3b9b39bd97e4ee3743e6fafca9e527146f8c0ef2a354bb593ae74\" failed" error="failed to destroy network for sandbox \"80ab6533acd3b9b39bd97e4ee3743e6fafca9e527146f8c0ef2a354bb593ae74\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 00:46:12.899260 kubelet[2110]: E1101 00:46:12.899196 2110 log.go:32] "StopPodSandbox from
runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"80ab6533acd3b9b39bd97e4ee3743e6fafca9e527146f8c0ef2a354bb593ae74\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="80ab6533acd3b9b39bd97e4ee3743e6fafca9e527146f8c0ef2a354bb593ae74" Nov 1 00:46:12.899422 kubelet[2110]: E1101 00:46:12.899303 2110 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"80ab6533acd3b9b39bd97e4ee3743e6fafca9e527146f8c0ef2a354bb593ae74"} Nov 1 00:46:12.899469 kubelet[2110]: E1101 00:46:12.899451 2110 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"85145623-0c17-44c2-88eb-320f9ee94755\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"80ab6533acd3b9b39bd97e4ee3743e6fafca9e527146f8c0ef2a354bb593ae74\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:46:12.899546 kubelet[2110]: E1101 00:46:12.899500 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"85145623-0c17-44c2-88eb-320f9ee94755\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"80ab6533acd3b9b39bd97e4ee3743e6fafca9e527146f8c0ef2a354bb593ae74\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-8977d49c-c9phz" podUID="85145623-0c17-44c2-88eb-320f9ee94755" Nov 1 00:46:12.902338 env[1313]: time="2025-11-01T00:46:12.902291158Z" level=error msg="StopPodSandbox for 
\"c0f66d4bbe5eb61cab56e8ea514aa99208d4e0768b561701e1c0823b7e31fbc9\" failed" error="failed to destroy network for sandbox \"c0f66d4bbe5eb61cab56e8ea514aa99208d4e0768b561701e1c0823b7e31fbc9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:46:12.902632 kubelet[2110]: E1101 00:46:12.902600 2110 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c0f66d4bbe5eb61cab56e8ea514aa99208d4e0768b561701e1c0823b7e31fbc9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c0f66d4bbe5eb61cab56e8ea514aa99208d4e0768b561701e1c0823b7e31fbc9" Nov 1 00:46:12.902714 kubelet[2110]: E1101 00:46:12.902641 2110 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c0f66d4bbe5eb61cab56e8ea514aa99208d4e0768b561701e1c0823b7e31fbc9"} Nov 1 00:46:12.902777 kubelet[2110]: E1101 00:46:12.902720 2110 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6fe081ef-ff27-4230-8865-b572345e2224\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c0f66d4bbe5eb61cab56e8ea514aa99208d4e0768b561701e1c0823b7e31fbc9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:46:12.902777 kubelet[2110]: E1101 00:46:12.902752 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6fe081ef-ff27-4230-8865-b572345e2224\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"c0f66d4bbe5eb61cab56e8ea514aa99208d4e0768b561701e1c0823b7e31fbc9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5dcb6947d5-ljzpr" podUID="6fe081ef-ff27-4230-8865-b572345e2224" Nov 1 00:46:12.908574 env[1313]: time="2025-11-01T00:46:12.908511293Z" level=error msg="StopPodSandbox for \"f06b1e1136a71a703e314baa39dd2dcabbfca8b11f5516d845f11ecceba01c4d\" failed" error="failed to destroy network for sandbox \"f06b1e1136a71a703e314baa39dd2dcabbfca8b11f5516d845f11ecceba01c4d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:46:12.909124 kubelet[2110]: E1101 00:46:12.908981 2110 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f06b1e1136a71a703e314baa39dd2dcabbfca8b11f5516d845f11ecceba01c4d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f06b1e1136a71a703e314baa39dd2dcabbfca8b11f5516d845f11ecceba01c4d" Nov 1 00:46:12.909124 kubelet[2110]: E1101 00:46:12.909028 2110 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f06b1e1136a71a703e314baa39dd2dcabbfca8b11f5516d845f11ecceba01c4d"} Nov 1 00:46:12.909124 kubelet[2110]: E1101 00:46:12.909060 2110 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"704976ec-fdca-4788-bd96-1a541f0cf01c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f06b1e1136a71a703e314baa39dd2dcabbfca8b11f5516d845f11ecceba01c4d\\\": plugin type=\\\"calico\\\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:46:12.909124 kubelet[2110]: E1101 00:46:12.909084 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"704976ec-fdca-4788-bd96-1a541f0cf01c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f06b1e1136a71a703e314baa39dd2dcabbfca8b11f5516d845f11ecceba01c4d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c858d548c-cmrkr" podUID="704976ec-fdca-4788-bd96-1a541f0cf01c" Nov 1 00:46:12.927205 env[1313]: time="2025-11-01T00:46:12.927136390Z" level=error msg="StopPodSandbox for \"9d6a4ff5375e9dee030a9560e67521d69c889f7455951caa8c67e0f2b2e1c2db\" failed" error="failed to destroy network for sandbox \"9d6a4ff5375e9dee030a9560e67521d69c889f7455951caa8c67e0f2b2e1c2db\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:46:12.927747 kubelet[2110]: E1101 00:46:12.927604 2110 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9d6a4ff5375e9dee030a9560e67521d69c889f7455951caa8c67e0f2b2e1c2db\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9d6a4ff5375e9dee030a9560e67521d69c889f7455951caa8c67e0f2b2e1c2db" Nov 1 00:46:12.927747 kubelet[2110]: E1101 00:46:12.927653 2110 kuberuntime_manager.go:1546] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"9d6a4ff5375e9dee030a9560e67521d69c889f7455951caa8c67e0f2b2e1c2db"} Nov 1 00:46:12.927747 kubelet[2110]: E1101 00:46:12.927689 2110 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5fbcbf90-e90d-4d2e-bb2c-68aa5206a338\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9d6a4ff5375e9dee030a9560e67521d69c889f7455951caa8c67e0f2b2e1c2db\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:46:12.927747 kubelet[2110]: E1101 00:46:12.927713 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5fbcbf90-e90d-4d2e-bb2c-68aa5206a338\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9d6a4ff5375e9dee030a9560e67521d69c889f7455951caa8c67e0f2b2e1c2db\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-bbnwx" podUID="5fbcbf90-e90d-4d2e-bb2c-68aa5206a338" Nov 1 00:46:12.928436 env[1313]: time="2025-11-01T00:46:12.928408953Z" level=error msg="StopPodSandbox for \"4a48ae9ec8973deb46659af44aa4a65628f3f9ec5347c31c080a0a8d52ee7cfa\" failed" error="failed to destroy network for sandbox \"4a48ae9ec8973deb46659af44aa4a65628f3f9ec5347c31c080a0a8d52ee7cfa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:46:12.928732 kubelet[2110]: E1101 00:46:12.928625 2110 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"4a48ae9ec8973deb46659af44aa4a65628f3f9ec5347c31c080a0a8d52ee7cfa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4a48ae9ec8973deb46659af44aa4a65628f3f9ec5347c31c080a0a8d52ee7cfa" Nov 1 00:46:12.928732 kubelet[2110]: E1101 00:46:12.928654 2110 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4a48ae9ec8973deb46659af44aa4a65628f3f9ec5347c31c080a0a8d52ee7cfa"} Nov 1 00:46:12.928732 kubelet[2110]: E1101 00:46:12.928677 2110 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6a5dd67d-d6ec-4bd2-9dac-f43fc3314f67\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4a48ae9ec8973deb46659af44aa4a65628f3f9ec5347c31c080a0a8d52ee7cfa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:46:12.928732 kubelet[2110]: E1101 00:46:12.928697 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6a5dd67d-d6ec-4bd2-9dac-f43fc3314f67\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4a48ae9ec8973deb46659af44aa4a65628f3f9ec5347c31c080a0a8d52ee7cfa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-jfbds" podUID="6a5dd67d-d6ec-4bd2-9dac-f43fc3314f67" Nov 1 00:46:12.930202 env[1313]: time="2025-11-01T00:46:12.930162050Z" level=error msg="StopPodSandbox for \"78af24ccbafbc3fe2805e32086a970ba748d84ceba4d9fcf426d832b1906a317\" failed" error="failed to destroy network for sandbox 
\"78af24ccbafbc3fe2805e32086a970ba748d84ceba4d9fcf426d832b1906a317\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:46:12.930503 kubelet[2110]: E1101 00:46:12.930396 2110 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"78af24ccbafbc3fe2805e32086a970ba748d84ceba4d9fcf426d832b1906a317\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="78af24ccbafbc3fe2805e32086a970ba748d84ceba4d9fcf426d832b1906a317" Nov 1 00:46:12.930503 kubelet[2110]: E1101 00:46:12.930429 2110 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"78af24ccbafbc3fe2805e32086a970ba748d84ceba4d9fcf426d832b1906a317"} Nov 1 00:46:12.930503 kubelet[2110]: E1101 00:46:12.930454 2110 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6dbb4ea3-15f9-43b9-bbdc-e92a3f607f2c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"78af24ccbafbc3fe2805e32086a970ba748d84ceba4d9fcf426d832b1906a317\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:46:12.930503 kubelet[2110]: E1101 00:46:12.930473 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6dbb4ea3-15f9-43b9-bbdc-e92a3f607f2c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"78af24ccbafbc3fe2805e32086a970ba748d84ceba4d9fcf426d832b1906a317\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-s979b" podUID="6dbb4ea3-15f9-43b9-bbdc-e92a3f607f2c" Nov 1 00:46:12.938330 env[1313]: time="2025-11-01T00:46:12.938271927Z" level=error msg="StopPodSandbox for \"33e3c78eeed54dfa10cf1875bdbb6f84aa8cb8d75ff47a04680d3a14eb4417a8\" failed" error="failed to destroy network for sandbox \"33e3c78eeed54dfa10cf1875bdbb6f84aa8cb8d75ff47a04680d3a14eb4417a8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:46:12.938619 kubelet[2110]: E1101 00:46:12.938541 2110 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"33e3c78eeed54dfa10cf1875bdbb6f84aa8cb8d75ff47a04680d3a14eb4417a8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="33e3c78eeed54dfa10cf1875bdbb6f84aa8cb8d75ff47a04680d3a14eb4417a8" Nov 1 00:46:12.938619 kubelet[2110]: E1101 00:46:12.938605 2110 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"33e3c78eeed54dfa10cf1875bdbb6f84aa8cb8d75ff47a04680d3a14eb4417a8"} Nov 1 00:46:12.938732 kubelet[2110]: E1101 00:46:12.938648 2110 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ffdf82e5-9850-41df-9576-1cf8a00ef8fd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"33e3c78eeed54dfa10cf1875bdbb6f84aa8cb8d75ff47a04680d3a14eb4417a8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" Nov 1 00:46:12.938732 kubelet[2110]: E1101 00:46:12.938673 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ffdf82e5-9850-41df-9576-1cf8a00ef8fd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"33e3c78eeed54dfa10cf1875bdbb6f84aa8cb8d75ff47a04680d3a14eb4417a8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c858d548c-69qzg" podUID="ffdf82e5-9850-41df-9576-1cf8a00ef8fd" Nov 1 00:46:12.938867 env[1313]: time="2025-11-01T00:46:12.938628738Z" level=error msg="StopPodSandbox for \"7246ed21bc671fc80d547e9ff96b714b56723571def31f8f182068ee838b16ca\" failed" error="failed to destroy network for sandbox \"7246ed21bc671fc80d547e9ff96b714b56723571def31f8f182068ee838b16ca\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:46:12.938903 kubelet[2110]: E1101 00:46:12.938860 2110 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7246ed21bc671fc80d547e9ff96b714b56723571def31f8f182068ee838b16ca\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7246ed21bc671fc80d547e9ff96b714b56723571def31f8f182068ee838b16ca" Nov 1 00:46:12.938903 kubelet[2110]: E1101 00:46:12.938889 2110 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7246ed21bc671fc80d547e9ff96b714b56723571def31f8f182068ee838b16ca"} Nov 1 00:46:12.938955 kubelet[2110]: E1101 00:46:12.938912 2110 kuberuntime_manager.go:1146] 
"killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"323323dc-c361-4116-a022-8e5f45430869\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7246ed21bc671fc80d547e9ff96b714b56723571def31f8f182068ee838b16ca\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:46:12.938955 kubelet[2110]: E1101 00:46:12.938931 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"323323dc-c361-4116-a022-8e5f45430869\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7246ed21bc671fc80d547e9ff96b714b56723571def31f8f182068ee838b16ca\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zk5w7" podUID="323323dc-c361-4116-a022-8e5f45430869" Nov 1 00:46:20.299074 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3240127398.mount: Deactivated successfully. 
Nov 1 00:46:20.962042 env[1313]: time="2025-11-01T00:46:20.961918855Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:46:20.964960 env[1313]: time="2025-11-01T00:46:20.964905415Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:46:20.966589 env[1313]: time="2025-11-01T00:46:20.966562897Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:46:20.968531 env[1313]: time="2025-11-01T00:46:20.968409837Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:46:20.969457 env[1313]: time="2025-11-01T00:46:20.969156108Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 1 00:46:20.977534 env[1313]: time="2025-11-01T00:46:20.977479531Z" level=info msg="CreateContainer within sandbox \"812043426828709770c7b297e7d027138d00f075bad0600dc6963c1cac5515bf\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 1 00:46:21.005407 env[1313]: time="2025-11-01T00:46:21.005320863Z" level=info msg="CreateContainer within sandbox \"812043426828709770c7b297e7d027138d00f075bad0600dc6963c1cac5515bf\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d31f05f3f32b68d0533eef02d0273f25ea3379e80983332b197602a71652c72b\"" Nov 1 00:46:21.006117 env[1313]: time="2025-11-01T00:46:21.006065982Z" level=info msg="StartContainer for 
\"d31f05f3f32b68d0533eef02d0273f25ea3379e80983332b197602a71652c72b\"" Nov 1 00:46:21.056819 env[1313]: time="2025-11-01T00:46:21.056743826Z" level=info msg="StartContainer for \"d31f05f3f32b68d0533eef02d0273f25ea3379e80983332b197602a71652c72b\" returns successfully" Nov 1 00:46:21.141976 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 1 00:46:21.142174 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 1 00:46:21.240983 env[1313]: time="2025-11-01T00:46:21.240824125Z" level=info msg="StopPodSandbox for \"80ab6533acd3b9b39bd97e4ee3743e6fafca9e527146f8c0ef2a354bb593ae74\"" Nov 1 00:46:21.405062 env[1313]: 2025-11-01 00:46:21.317 [INFO][3469] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="80ab6533acd3b9b39bd97e4ee3743e6fafca9e527146f8c0ef2a354bb593ae74" Nov 1 00:46:21.405062 env[1313]: 2025-11-01 00:46:21.317 [INFO][3469] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="80ab6533acd3b9b39bd97e4ee3743e6fafca9e527146f8c0ef2a354bb593ae74" iface="eth0" netns="/var/run/netns/cni-30d28065-42de-b38a-82e2-ba196e6f399d" Nov 1 00:46:21.405062 env[1313]: 2025-11-01 00:46:21.318 [INFO][3469] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="80ab6533acd3b9b39bd97e4ee3743e6fafca9e527146f8c0ef2a354bb593ae74" iface="eth0" netns="/var/run/netns/cni-30d28065-42de-b38a-82e2-ba196e6f399d" Nov 1 00:46:21.405062 env[1313]: 2025-11-01 00:46:21.318 [INFO][3469] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="80ab6533acd3b9b39bd97e4ee3743e6fafca9e527146f8c0ef2a354bb593ae74" iface="eth0" netns="/var/run/netns/cni-30d28065-42de-b38a-82e2-ba196e6f399d" Nov 1 00:46:21.405062 env[1313]: 2025-11-01 00:46:21.318 [INFO][3469] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="80ab6533acd3b9b39bd97e4ee3743e6fafca9e527146f8c0ef2a354bb593ae74" Nov 1 00:46:21.405062 env[1313]: 2025-11-01 00:46:21.318 [INFO][3469] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="80ab6533acd3b9b39bd97e4ee3743e6fafca9e527146f8c0ef2a354bb593ae74" Nov 1 00:46:21.405062 env[1313]: 2025-11-01 00:46:21.390 [INFO][3479] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="80ab6533acd3b9b39bd97e4ee3743e6fafca9e527146f8c0ef2a354bb593ae74" HandleID="k8s-pod-network.80ab6533acd3b9b39bd97e4ee3743e6fafca9e527146f8c0ef2a354bb593ae74" Workload="localhost-k8s-whisker--8977d49c--c9phz-eth0" Nov 1 00:46:21.405062 env[1313]: 2025-11-01 00:46:21.390 [INFO][3479] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:46:21.405062 env[1313]: 2025-11-01 00:46:21.391 [INFO][3479] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:46:21.405062 env[1313]: 2025-11-01 00:46:21.398 [WARNING][3479] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="80ab6533acd3b9b39bd97e4ee3743e6fafca9e527146f8c0ef2a354bb593ae74" HandleID="k8s-pod-network.80ab6533acd3b9b39bd97e4ee3743e6fafca9e527146f8c0ef2a354bb593ae74" Workload="localhost-k8s-whisker--8977d49c--c9phz-eth0" Nov 1 00:46:21.405062 env[1313]: 2025-11-01 00:46:21.398 [INFO][3479] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="80ab6533acd3b9b39bd97e4ee3743e6fafca9e527146f8c0ef2a354bb593ae74" HandleID="k8s-pod-network.80ab6533acd3b9b39bd97e4ee3743e6fafca9e527146f8c0ef2a354bb593ae74" Workload="localhost-k8s-whisker--8977d49c--c9phz-eth0" Nov 1 00:46:21.405062 env[1313]: 2025-11-01 00:46:21.400 [INFO][3479] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:46:21.405062 env[1313]: 2025-11-01 00:46:21.403 [INFO][3469] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="80ab6533acd3b9b39bd97e4ee3743e6fafca9e527146f8c0ef2a354bb593ae74" Nov 1 00:46:21.407926 env[1313]: time="2025-11-01T00:46:21.405222942Z" level=info msg="TearDown network for sandbox \"80ab6533acd3b9b39bd97e4ee3743e6fafca9e527146f8c0ef2a354bb593ae74\" successfully" Nov 1 00:46:21.407926 env[1313]: time="2025-11-01T00:46:21.405261464Z" level=info msg="StopPodSandbox for \"80ab6533acd3b9b39bd97e4ee3743e6fafca9e527146f8c0ef2a354bb593ae74\" returns successfully" Nov 1 00:46:21.407901 systemd[1]: run-netns-cni\x2d30d28065\x2d42de\x2db38a\x2d82e2\x2dba196e6f399d.mount: Deactivated successfully. 
Nov 1 00:46:21.539284 kubelet[2110]: I1101 00:46:21.538644 2110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85145623-0c17-44c2-88eb-320f9ee94755-whisker-ca-bundle\") pod \"85145623-0c17-44c2-88eb-320f9ee94755\" (UID: \"85145623-0c17-44c2-88eb-320f9ee94755\") " Nov 1 00:46:21.539284 kubelet[2110]: I1101 00:46:21.538696 2110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/85145623-0c17-44c2-88eb-320f9ee94755-whisker-backend-key-pair\") pod \"85145623-0c17-44c2-88eb-320f9ee94755\" (UID: \"85145623-0c17-44c2-88eb-320f9ee94755\") " Nov 1 00:46:21.539284 kubelet[2110]: I1101 00:46:21.538716 2110 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkh2w\" (UniqueName: \"kubernetes.io/projected/85145623-0c17-44c2-88eb-320f9ee94755-kube-api-access-kkh2w\") pod \"85145623-0c17-44c2-88eb-320f9ee94755\" (UID: \"85145623-0c17-44c2-88eb-320f9ee94755\") " Nov 1 00:46:21.541932 kubelet[2110]: I1101 00:46:21.541884 2110 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85145623-0c17-44c2-88eb-320f9ee94755-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "85145623-0c17-44c2-88eb-320f9ee94755" (UID: "85145623-0c17-44c2-88eb-320f9ee94755"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 00:46:21.564559 kubelet[2110]: I1101 00:46:21.564451 2110 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85145623-0c17-44c2-88eb-320f9ee94755-kube-api-access-kkh2w" (OuterVolumeSpecName: "kube-api-access-kkh2w") pod "85145623-0c17-44c2-88eb-320f9ee94755" (UID: "85145623-0c17-44c2-88eb-320f9ee94755"). InnerVolumeSpecName "kube-api-access-kkh2w". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 00:46:21.565394 kubelet[2110]: I1101 00:46:21.565299 2110 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85145623-0c17-44c2-88eb-320f9ee94755-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "85145623-0c17-44c2-88eb-320f9ee94755" (UID: "85145623-0c17-44c2-88eb-320f9ee94755"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 00:46:21.566384 systemd[1]: var-lib-kubelet-pods-85145623\x2d0c17\x2d44c2\x2d88eb\x2d320f9ee94755-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkkh2w.mount: Deactivated successfully. Nov 1 00:46:21.566534 systemd[1]: var-lib-kubelet-pods-85145623\x2d0c17\x2d44c2\x2d88eb\x2d320f9ee94755-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 1 00:46:21.639920 kubelet[2110]: I1101 00:46:21.639843 2110 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kkh2w\" (UniqueName: \"kubernetes.io/projected/85145623-0c17-44c2-88eb-320f9ee94755-kube-api-access-kkh2w\") on node \"localhost\" DevicePath \"\"" Nov 1 00:46:21.639920 kubelet[2110]: I1101 00:46:21.639898 2110 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/85145623-0c17-44c2-88eb-320f9ee94755-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Nov 1 00:46:21.639920 kubelet[2110]: I1101 00:46:21.639916 2110 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85145623-0c17-44c2-88eb-320f9ee94755-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Nov 1 00:46:21.880888 kubelet[2110]: E1101 00:46:21.880380 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Nov 1 00:46:22.271632 kubelet[2110]: I1101 00:46:22.271586 2110 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85145623-0c17-44c2-88eb-320f9ee94755" path="/var/lib/kubelet/pods/85145623-0c17-44c2-88eb-320f9ee94755/volumes" Nov 1 00:46:22.274131 kubelet[2110]: I1101 00:46:22.274080 2110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-s7tp8" podStartSLOduration=2.415938538 podStartE2EDuration="25.274058267s" podCreationTimestamp="2025-11-01 00:45:57 +0000 UTC" firstStartedPulling="2025-11-01 00:45:58.111927923 +0000 UTC m=+21.971385713" lastFinishedPulling="2025-11-01 00:46:20.970047652 +0000 UTC m=+44.829505442" observedRunningTime="2025-11-01 00:46:22.258125407 +0000 UTC m=+46.117583197" watchObservedRunningTime="2025-11-01 00:46:22.274058267 +0000 UTC m=+46.133516067" Nov 1 00:46:22.345652 kubelet[2110]: I1101 00:46:22.345552 2110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/60293e01-1e82-445c-9d51-cf8544191dce-whisker-ca-bundle\") pod \"whisker-547545f98f-bqwf6\" (UID: \"60293e01-1e82-445c-9d51-cf8544191dce\") " pod="calico-system/whisker-547545f98f-bqwf6" Nov 1 00:46:22.345652 kubelet[2110]: I1101 00:46:22.345625 2110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4bvg\" (UniqueName: \"kubernetes.io/projected/60293e01-1e82-445c-9d51-cf8544191dce-kube-api-access-g4bvg\") pod \"whisker-547545f98f-bqwf6\" (UID: \"60293e01-1e82-445c-9d51-cf8544191dce\") " pod="calico-system/whisker-547545f98f-bqwf6" Nov 1 00:46:22.345652 kubelet[2110]: I1101 00:46:22.345665 2110 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/60293e01-1e82-445c-9d51-cf8544191dce-whisker-backend-key-pair\") pod 
\"whisker-547545f98f-bqwf6\" (UID: \"60293e01-1e82-445c-9d51-cf8544191dce\") " pod="calico-system/whisker-547545f98f-bqwf6" Nov 1 00:46:22.455189 kernel: audit: type=1400 audit(1761957982.434:287): avc: denied { write } for pid=3560 comm="tee" name="fd" dev="proc" ino=26659 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 00:46:22.455426 kernel: audit: type=1300 audit(1761957982.434:287): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcd1e987e5 a2=241 a3=1b6 items=1 ppid=3508 pid=3560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:22.434000 audit[3560]: AVC avc: denied { write } for pid=3560 comm="tee" name="fd" dev="proc" ino=26659 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 00:46:22.434000 audit[3560]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcd1e987e5 a2=241 a3=1b6 items=1 ppid=3508 pid=3560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:22.434000 audit: CWD cwd="/etc/service/enabled/felix/log" Nov 1 00:46:22.480385 kernel: audit: type=1307 audit(1761957982.434:287): cwd="/etc/service/enabled/felix/log" Nov 1 00:46:22.434000 audit: PATH item=0 name="/dev/fd/63" inode=24202 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:46:22.434000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 00:46:22.505145 kernel: audit: type=1302 audit(1761957982.434:287): item=0 name="/dev/fd/63" 
inode=24202 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:46:22.505410 kernel: audit: type=1327 audit(1761957982.434:287): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 00:46:22.435000 audit[3552]: AVC avc: denied { write } for pid=3552 comm="tee" name="fd" dev="proc" ino=26663 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 00:46:22.435000 audit[3552]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc6f2827e5 a2=241 a3=1b6 items=1 ppid=3510 pid=3552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:22.528802 kernel: audit: type=1400 audit(1761957982.435:288): avc: denied { write } for pid=3552 comm="tee" name="fd" dev="proc" ino=26663 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 00:46:22.529025 kernel: audit: type=1300 audit(1761957982.435:288): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc6f2827e5 a2=241 a3=1b6 items=1 ppid=3510 pid=3552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:22.533721 kernel: audit: type=1307 audit(1761957982.435:288): cwd="/etc/service/enabled/confd/log" Nov 1 00:46:22.435000 audit: CWD cwd="/etc/service/enabled/confd/log" Nov 1 00:46:22.435000 audit: PATH item=0 name="/dev/fd/63" inode=26653 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:46:22.544341 kernel: 
audit: type=1302 audit(1761957982.435:288): item=0 name="/dev/fd/63" inode=26653 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:46:22.435000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 00:46:22.450000 audit[3579]: AVC avc: denied { write } for pid=3579 comm="tee" name="fd" dev="proc" ino=25064 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 00:46:22.450000 audit[3579]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe228e27d5 a2=241 a3=1b6 items=1 ppid=3509 pid=3579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:22.450000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Nov 1 00:46:22.450000 audit: PATH item=0 name="/dev/fd/63" inode=25061 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:46:22.555442 kernel: audit: type=1327 audit(1761957982.435:288): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 00:46:22.450000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 00:46:22.450000 audit[3571]: AVC avc: denied { write } for pid=3571 comm="tee" name="fd" dev="proc" ino=25068 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 00:46:22.450000 audit[3571]: SYSCALL arch=c000003e syscall=257 
success=yes exit=3 a0=ffffff9c a1=7fffc502e7e6 a2=241 a3=1b6 items=1 ppid=3517 pid=3571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:22.450000 audit: CWD cwd="/etc/service/enabled/bird/log" Nov 1 00:46:22.450000 audit: PATH item=0 name="/dev/fd/63" inode=25057 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:46:22.450000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 00:46:22.451000 audit[3574]: AVC avc: denied { write } for pid=3574 comm="tee" name="fd" dev="proc" ino=25072 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 00:46:22.451000 audit[3574]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffe01287e7 a2=241 a3=1b6 items=1 ppid=3521 pid=3574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:22.451000 audit: CWD cwd="/etc/service/enabled/cni/log" Nov 1 00:46:22.451000 audit: PATH item=0 name="/dev/fd/63" inode=25058 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:46:22.451000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 00:46:22.463000 audit[3558]: AVC avc: denied { write } for pid=3558 comm="tee" name="fd" dev="proc" ino=25076 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir 
permissive=0 Nov 1 00:46:22.463000 audit[3558]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffedb9dd7d6 a2=241 a3=1b6 items=1 ppid=3516 pid=3558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:22.463000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Nov 1 00:46:22.463000 audit: PATH item=0 name="/dev/fd/63" inode=26656 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:46:22.463000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 00:46:22.467000 audit[3575]: AVC avc: denied { write } for pid=3575 comm="tee" name="fd" dev="proc" ino=24212 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 00:46:22.467000 audit[3575]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff74a7b7e5 a2=241 a3=1b6 items=1 ppid=3514 pid=3575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:22.467000 audit: CWD cwd="/etc/service/enabled/bird6/log" Nov 1 00:46:22.467000 audit: PATH item=0 name="/dev/fd/63" inode=26667 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:46:22.467000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 00:46:22.579650 env[1313]: time="2025-11-01T00:46:22.579576529Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-547545f98f-bqwf6,Uid:60293e01-1e82-445c-9d51-cf8544191dce,Namespace:calico-system,Attempt:0,}" Nov 1 00:46:22.830000 audit[3613]: AVC avc: denied { bpf } for pid=3613 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:22.830000 audit[3613]: AVC avc: denied { bpf } for pid=3613 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:22.830000 audit[3613]: AVC avc: denied { perfmon } for pid=3613 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:22.830000 audit[3613]: AVC avc: denied { perfmon } for pid=3613 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:22.830000 audit[3613]: AVC avc: denied { perfmon } for pid=3613 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:22.830000 audit[3613]: AVC avc: denied { perfmon } for pid=3613 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:22.830000 audit[3613]: AVC avc: denied { perfmon } for pid=3613 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:22.830000 audit[3613]: AVC avc: denied { bpf } for pid=3613 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:22.830000 audit[3613]: AVC avc: denied { bpf } for pid=3613 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:22.830000 audit: BPF prog-id=10 op=LOAD Nov 1 00:46:22.830000 audit[3613]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc868d3db0 a2=98 a3=1fffffffffffffff items=0 ppid=3511 pid=3613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:22.830000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Nov 1 00:46:22.831000 audit: BPF prog-id=10 op=UNLOAD Nov 1 00:46:22.831000 audit[3613]: AVC avc: denied { bpf } for pid=3613 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:22.831000 audit[3613]: AVC avc: denied { bpf } for pid=3613 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:22.831000 audit[3613]: AVC avc: denied { perfmon } for pid=3613 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:22.831000 audit[3613]: AVC avc: denied { perfmon } for pid=3613 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:22.831000 audit[3613]: AVC avc: denied { perfmon } for pid=3613 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:22.831000 audit[3613]: AVC avc: denied { perfmon } for pid=3613 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:22.831000 audit[3613]: AVC avc: denied { perfmon } for pid=3613 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:22.831000 audit[3613]: AVC avc: denied { bpf } for pid=3613 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:22.831000 audit[3613]: AVC avc: denied { bpf } for pid=3613 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:22.831000 audit: BPF prog-id=11 op=LOAD Nov 1 00:46:22.831000 audit[3613]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc868d3c90 a2=94 a3=3 items=0 ppid=3511 pid=3613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:22.831000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Nov 1 00:46:22.832000 audit: BPF prog-id=11 op=UNLOAD Nov 1 00:46:22.832000 audit[3613]: AVC avc: denied { bpf } for pid=3613 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:22.832000 audit[3613]: AVC avc: denied { bpf } for pid=3613 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:22.832000 audit[3613]: AVC avc: denied { perfmon } for pid=3613 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:22.832000 audit[3613]: AVC avc: denied { perfmon } for pid=3613 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:22.832000 audit[3613]: AVC avc: denied { perfmon } for pid=3613 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:22.832000 audit[3613]: AVC avc: denied { perfmon } for pid=3613 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:22.832000 audit[3613]: AVC avc: denied { perfmon } for pid=3613 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:22.832000 audit[3613]: AVC avc: denied { bpf } for pid=3613 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:22.832000 audit[3613]: AVC avc: denied { bpf } for pid=3613 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:22.832000 audit: BPF prog-id=12 op=LOAD Nov 1 00:46:22.832000 audit[3613]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc868d3cd0 a2=94 a3=7ffc868d3eb0 items=0 ppid=3511 pid=3613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:22.832000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Nov 1 00:46:22.839000 audit: BPF prog-id=12 op=UNLOAD Nov 1 00:46:22.839000 audit[3613]: AVC avc: denied { perfmon } for pid=3613 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:22.839000 audit[3613]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=0 a1=7ffc868d3da0 a2=50 a3=a000000085 items=0 ppid=3511 pid=3613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:22.839000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Nov 1 00:46:22.842000 audit[3614]: AVC avc: denied { bpf } for pid=3614 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:22.842000 audit[3614]: AVC avc: denied { bpf } for pid=3614 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:22.842000 audit[3614]: AVC avc: denied { perfmon } for pid=3614 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:22.842000 audit[3614]: AVC avc: denied { perfmon } for pid=3614 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Nov 1 00:46:22.842000 audit[3614]: AVC avc: denied { perfmon } for pid=3614 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:22.842000 audit[3614]: AVC avc: denied { perfmon } for pid=3614 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:22.842000 audit[3614]: AVC avc: denied { perfmon } for pid=3614 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:22.842000 audit[3614]: AVC avc: denied { bpf } for pid=3614 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:22.842000 audit[3614]: AVC avc: denied { bpf } for pid=3614 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:22.842000 audit: BPF prog-id=13 op=LOAD Nov 1 00:46:22.842000 audit[3614]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc2bd36ea0 a2=98 a3=3 items=0 ppid=3511 pid=3614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:22.842000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:46:22.842000 audit: BPF prog-id=13 op=UNLOAD Nov 1 00:46:22.842000 audit[3614]: AVC avc: denied { bpf } for pid=3614 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:22.842000 audit[3614]: AVC avc: denied { bpf } for pid=3614 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:22.842000 audit[3614]: AVC avc: denied { perfmon } for pid=3614 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:22.842000 audit[3614]: AVC avc: denied { perfmon } for pid=3614 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:22.842000 audit[3614]: AVC avc: denied { perfmon } for pid=3614 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:22.842000 audit[3614]: AVC avc: denied { perfmon } for pid=3614 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:22.842000 audit[3614]: AVC avc: denied { perfmon } for pid=3614 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:22.842000 audit[3614]: AVC avc: denied { bpf } for pid=3614 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:22.842000 audit[3614]: AVC avc: denied { bpf } for pid=3614 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:22.842000 audit: BPF prog-id=14 op=LOAD Nov 1 00:46:22.842000 audit[3614]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc2bd36c90 a2=94 a3=54428f items=0 ppid=3511 pid=3614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:22.842000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:46:22.843000 audit: BPF prog-id=14 op=UNLOAD Nov 1 00:46:22.843000 audit[3614]: AVC avc: denied { bpf } for pid=3614 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:22.843000 audit[3614]: AVC avc: denied { bpf } for pid=3614 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:22.843000 audit[3614]: AVC avc: denied { perfmon } for pid=3614 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:22.843000 audit[3614]: AVC avc: denied { perfmon } for pid=3614 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:22.843000 audit[3614]: AVC avc: denied { perfmon } for pid=3614 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:22.843000 audit[3614]: AVC avc: denied { perfmon } for pid=3614 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:22.843000 audit[3614]: AVC avc: denied { perfmon } for pid=3614 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:22.843000 audit[3614]: AVC avc: denied { bpf } for pid=3614 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:22.843000 audit[3614]: AVC avc: denied { bpf } for pid=3614 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Nov 1 00:46:22.843000 audit: BPF prog-id=15 op=LOAD Nov 1 00:46:22.843000 audit[3614]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc2bd36cc0 a2=94 a3=2 items=0 ppid=3511 pid=3614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:22.843000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:46:22.843000 audit: BPF prog-id=15 op=UNLOAD Nov 1 00:46:22.880503 kubelet[2110]: E1101 00:46:22.880468 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:46:23.025000 audit[3614]: AVC avc: denied { bpf } for pid=3614 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.025000 audit[3614]: AVC avc: denied { bpf } for pid=3614 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.025000 audit[3614]: AVC avc: denied { perfmon } for pid=3614 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.025000 audit[3614]: AVC avc: denied { perfmon } for pid=3614 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.025000 audit[3614]: AVC avc: denied { perfmon } for pid=3614 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.025000 audit[3614]: AVC avc: denied { perfmon } for pid=3614 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.025000 audit[3614]: AVC avc: denied { perfmon } for pid=3614 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.025000 audit[3614]: AVC avc: denied { bpf } for pid=3614 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.025000 audit[3614]: AVC avc: denied { bpf } for pid=3614 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.025000 audit: BPF prog-id=16 op=LOAD Nov 1 00:46:23.025000 audit[3614]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc2bd36b80 a2=94 a3=1 items=0 ppid=3511 pid=3614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.025000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:46:23.026000 audit: BPF prog-id=16 op=UNLOAD Nov 1 00:46:23.026000 audit[3614]: AVC avc: denied { perfmon } for pid=3614 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.026000 audit[3614]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffc2bd36c50 a2=50 a3=7ffc2bd36d30 items=0 ppid=3511 pid=3614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.026000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:46:23.037000 audit[3614]: AVC avc: denied { bpf } for 
pid=3614 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.037000 audit[3614]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc2bd36b90 a2=28 a3=0 items=0 ppid=3511 pid=3614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.037000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:46:23.037000 audit[3614]: AVC avc: denied { bpf } for pid=3614 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.037000 audit[3614]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc2bd36bc0 a2=28 a3=0 items=0 ppid=3511 pid=3614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.037000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:46:23.037000 audit[3614]: AVC avc: denied { bpf } for pid=3614 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.037000 audit[3614]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc2bd36ad0 a2=28 a3=0 items=0 ppid=3511 pid=3614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.037000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:46:23.037000 audit[3614]: AVC avc: denied { bpf } for pid=3614 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.037000 audit[3614]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc2bd36be0 a2=28 a3=0 items=0 ppid=3511 pid=3614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.037000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:46:23.037000 audit[3614]: AVC avc: denied { bpf } for pid=3614 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.037000 audit[3614]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc2bd36bc0 a2=28 a3=0 items=0 ppid=3511 pid=3614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.037000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:46:23.037000 audit[3614]: AVC avc: denied { bpf } for pid=3614 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.037000 audit[3614]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc2bd36bb0 a2=28 a3=0 items=0 ppid=3511 pid=3614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.037000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:46:23.037000 audit[3614]: AVC avc: denied { bpf } for pid=3614 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 
00:46:23.037000 audit[3614]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc2bd36be0 a2=28 a3=0 items=0 ppid=3511 pid=3614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.037000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:46:23.037000 audit[3614]: AVC avc: denied { bpf } for pid=3614 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.037000 audit[3614]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc2bd36bc0 a2=28 a3=0 items=0 ppid=3511 pid=3614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.037000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:46:23.037000 audit[3614]: AVC avc: denied { bpf } for pid=3614 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.037000 audit[3614]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc2bd36be0 a2=28 a3=0 items=0 ppid=3511 pid=3614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.037000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:46:23.037000 audit[3614]: AVC avc: denied { bpf } for pid=3614 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.037000 audit[3614]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 
a0=12 a1=7ffc2bd36bb0 a2=28 a3=0 items=0 ppid=3511 pid=3614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.037000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:46:23.037000 audit[3614]: AVC avc: denied { bpf } for pid=3614 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.037000 audit[3614]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc2bd36c20 a2=28 a3=0 items=0 ppid=3511 pid=3614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.037000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:46:23.038000 audit[3614]: AVC avc: denied { perfmon } for pid=3614 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.038000 audit[3614]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffc2bd369d0 a2=50 a3=1 items=0 ppid=3511 pid=3614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.038000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:46:23.038000 audit[3614]: AVC avc: denied { bpf } for pid=3614 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.038000 audit[3614]: AVC avc: denied { bpf } for pid=3614 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.038000 audit[3614]: AVC avc: denied { perfmon } for pid=3614 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.038000 audit[3614]: AVC avc: denied { perfmon } for pid=3614 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.038000 audit[3614]: AVC avc: denied { perfmon } for pid=3614 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.038000 audit[3614]: AVC avc: denied { perfmon } for pid=3614 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.038000 audit[3614]: AVC avc: denied { perfmon } for pid=3614 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.038000 audit[3614]: AVC avc: denied { bpf } for pid=3614 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.038000 audit[3614]: AVC avc: denied { bpf } for pid=3614 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.038000 audit: BPF prog-id=17 op=LOAD Nov 1 00:46:23.038000 audit[3614]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc2bd369d0 a2=94 a3=5 items=0 ppid=3511 pid=3614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.038000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:46:23.038000 audit: BPF prog-id=17 op=UNLOAD Nov 1 00:46:23.038000 audit[3614]: AVC avc: denied { perfmon } for pid=3614 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.038000 audit[3614]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffc2bd36a80 a2=50 a3=1 items=0 ppid=3511 pid=3614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.038000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:46:23.038000 audit[3614]: AVC avc: denied { bpf } for pid=3614 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.038000 audit[3614]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffc2bd36ba0 a2=4 a3=38 items=0 ppid=3511 pid=3614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.038000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:46:23.038000 audit[3614]: AVC avc: denied { bpf } for pid=3614 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.038000 audit[3614]: AVC avc: denied { bpf } for pid=3614 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.038000 audit[3614]: AVC avc: denied { perfmon } for pid=3614 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Nov 1 00:46:23.038000 audit[3614]: AVC avc: denied { bpf } for pid=3614 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.038000 audit[3614]: AVC avc: denied { perfmon } for pid=3614 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.038000 audit[3614]: AVC avc: denied { perfmon } for pid=3614 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.038000 audit[3614]: AVC avc: denied { perfmon } for pid=3614 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.038000 audit[3614]: AVC avc: denied { perfmon } for pid=3614 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.038000 audit[3614]: AVC avc: denied { perfmon } for pid=3614 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.038000 audit[3614]: AVC avc: denied { bpf } for pid=3614 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.038000 audit[3614]: AVC avc: denied { confidentiality } for pid=3614 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Nov 1 00:46:23.038000 audit[3614]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffc2bd36bf0 a2=94 a3=6 items=0 ppid=3511 pid=3614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.038000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:46:23.039000 audit[3614]: AVC avc: denied { bpf } for pid=3614 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.039000 audit[3614]: AVC avc: denied { bpf } for pid=3614 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.039000 audit[3614]: AVC avc: denied { perfmon } for pid=3614 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.039000 audit[3614]: AVC avc: denied { bpf } for pid=3614 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.039000 audit[3614]: AVC avc: denied { perfmon } for pid=3614 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.039000 audit[3614]: AVC avc: denied { perfmon } for pid=3614 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.039000 audit[3614]: AVC avc: denied { perfmon } for pid=3614 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.039000 audit[3614]: AVC avc: denied { perfmon } for pid=3614 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.039000 audit[3614]: AVC avc: denied { perfmon } for pid=3614 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.039000 audit[3614]: AVC avc: denied { bpf } for pid=3614 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.039000 audit[3614]: AVC avc: denied { confidentiality } for pid=3614 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Nov 1 00:46:23.039000 audit[3614]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffc2bd363a0 a2=94 a3=88 items=0 ppid=3511 pid=3614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.039000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:46:23.039000 audit[3614]: AVC avc: denied { bpf } for pid=3614 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.039000 audit[3614]: AVC avc: denied { bpf } for pid=3614 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.039000 audit[3614]: AVC avc: denied { perfmon } for pid=3614 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.039000 audit[3614]: AVC avc: denied { bpf } for pid=3614 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.039000 audit[3614]: AVC avc: denied { perfmon } for pid=3614 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.039000 audit[3614]: AVC avc: denied { perfmon } for pid=3614 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.039000 audit[3614]: AVC avc: denied { perfmon } for pid=3614 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.039000 audit[3614]: AVC avc: denied { perfmon } for pid=3614 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.039000 audit[3614]: AVC avc: denied { perfmon } for pid=3614 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.039000 audit[3614]: AVC avc: denied { bpf } for pid=3614 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.039000 audit[3614]: AVC avc: denied { confidentiality } for pid=3614 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Nov 1 00:46:23.039000 audit[3614]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffc2bd363a0 a2=94 a3=88 items=0 ppid=3511 pid=3614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.039000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:46:23.048000 audit[3657]: AVC avc: denied { bpf } for pid=3657 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.048000 audit[3657]: AVC avc: denied { bpf } for pid=3657 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.048000 audit[3657]: AVC avc: denied { perfmon } for pid=3657 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.048000 audit[3657]: AVC avc: denied { perfmon } for pid=3657 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.048000 audit[3657]: AVC avc: denied { perfmon } for pid=3657 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.048000 audit[3657]: AVC avc: denied { perfmon } for pid=3657 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.048000 audit[3657]: AVC avc: denied { perfmon } for pid=3657 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.048000 audit[3657]: AVC avc: denied { bpf } for pid=3657 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.048000 audit[3657]: AVC avc: denied { bpf } for pid=3657 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.048000 audit: BPF prog-id=18 op=LOAD Nov 1 00:46:23.048000 audit[3657]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc998f4130 a2=98 
a3=1999999999999999 items=0 ppid=3511 pid=3657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.048000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Nov 1 00:46:23.049000 audit: BPF prog-id=18 op=UNLOAD Nov 1 00:46:23.049000 audit[3657]: AVC avc: denied { bpf } for pid=3657 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.049000 audit[3657]: AVC avc: denied { bpf } for pid=3657 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.049000 audit[3657]: AVC avc: denied { perfmon } for pid=3657 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.049000 audit[3657]: AVC avc: denied { perfmon } for pid=3657 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.049000 audit[3657]: AVC avc: denied { perfmon } for pid=3657 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.049000 audit[3657]: AVC avc: denied { perfmon } for pid=3657 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.049000 audit[3657]: AVC avc: denied { perfmon } for pid=3657 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.049000 audit[3657]: AVC avc: denied { bpf } for pid=3657 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.049000 audit[3657]: AVC avc: denied { bpf } for pid=3657 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.049000 audit: BPF prog-id=19 op=LOAD Nov 1 00:46:23.049000 audit[3657]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc998f4010 a2=94 a3=ffff items=0 ppid=3511 pid=3657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.049000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Nov 1 00:46:23.049000 audit: BPF prog-id=19 op=UNLOAD Nov 1 00:46:23.049000 audit[3657]: AVC avc: denied { bpf } for pid=3657 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.049000 audit[3657]: AVC avc: denied { bpf } for pid=3657 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.049000 audit[3657]: AVC avc: denied { perfmon } for pid=3657 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.049000 audit[3657]: AVC avc: denied { perfmon } for pid=3657 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.049000 audit[3657]: AVC avc: denied { perfmon } for pid=3657 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.049000 audit[3657]: AVC avc: denied { perfmon } for pid=3657 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.049000 audit[3657]: AVC avc: denied { perfmon } for pid=3657 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.049000 audit[3657]: AVC avc: denied { bpf } for pid=3657 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.049000 audit[3657]: AVC avc: denied { bpf } for pid=3657 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.049000 audit: BPF prog-id=20 op=LOAD Nov 1 00:46:23.049000 audit[3657]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc998f4050 a2=94 a3=7ffc998f4230 items=0 ppid=3511 pid=3657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.049000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Nov 1 00:46:23.049000 audit: BPF prog-id=20 op=UNLOAD Nov 1 00:46:23.272659 env[1313]: 
time="2025-11-01T00:46:23.272592689Z" level=info msg="StopPodSandbox for \"33e3c78eeed54dfa10cf1875bdbb6f84aa8cb8d75ff47a04680d3a14eb4417a8\"" Nov 1 00:46:23.287992 systemd-networkd[1077]: vxlan.calico: Link UP Nov 1 00:46:23.288007 systemd-networkd[1077]: vxlan.calico: Gained carrier Nov 1 00:46:23.394000 audit[3701]: AVC avc: denied { bpf } for pid=3701 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.394000 audit[3701]: AVC avc: denied { bpf } for pid=3701 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.394000 audit[3701]: AVC avc: denied { perfmon } for pid=3701 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.394000 audit[3701]: AVC avc: denied { perfmon } for pid=3701 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.394000 audit[3701]: AVC avc: denied { perfmon } for pid=3701 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.394000 audit[3701]: AVC avc: denied { perfmon } for pid=3701 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.394000 audit[3701]: AVC avc: denied { perfmon } for pid=3701 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.394000 audit[3701]: AVC avc: denied { bpf } for pid=3701 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 
00:46:23.394000 audit[3701]: AVC avc: denied { bpf } for pid=3701 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.394000 audit: BPF prog-id=21 op=LOAD Nov 1 00:46:23.394000 audit[3701]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe501c3530 a2=98 a3=0 items=0 ppid=3511 pid=3701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.394000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:46:23.401000 audit: BPF prog-id=21 op=UNLOAD Nov 1 00:46:23.401000 audit[3701]: AVC avc: denied { bpf } for pid=3701 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.401000 audit[3701]: AVC avc: denied { bpf } for pid=3701 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.401000 audit[3701]: AVC avc: denied { perfmon } for pid=3701 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.401000 audit[3701]: AVC avc: denied { perfmon } for pid=3701 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.401000 audit[3701]: AVC avc: denied { perfmon } for pid=3701 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.401000 audit[3701]: 
AVC avc: denied { perfmon } for pid=3701 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.401000 audit[3701]: AVC avc: denied { perfmon } for pid=3701 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.401000 audit[3701]: AVC avc: denied { bpf } for pid=3701 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.401000 audit[3701]: AVC avc: denied { bpf } for pid=3701 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.401000 audit: BPF prog-id=22 op=LOAD Nov 1 00:46:23.401000 audit[3701]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe501c3340 a2=94 a3=54428f items=0 ppid=3511 pid=3701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.401000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:46:23.404000 audit: BPF prog-id=22 op=UNLOAD Nov 1 00:46:23.404000 audit[3701]: AVC avc: denied { bpf } for pid=3701 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.404000 audit[3701]: AVC avc: denied { bpf } for pid=3701 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.404000 audit[3701]: AVC avc: denied { perfmon } 
for pid=3701 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.404000 audit[3701]: AVC avc: denied { perfmon } for pid=3701 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.404000 audit[3701]: AVC avc: denied { perfmon } for pid=3701 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.404000 audit[3701]: AVC avc: denied { perfmon } for pid=3701 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.404000 audit[3701]: AVC avc: denied { perfmon } for pid=3701 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.404000 audit[3701]: AVC avc: denied { bpf } for pid=3701 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.404000 audit[3701]: AVC avc: denied { bpf } for pid=3701 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.404000 audit: BPF prog-id=23 op=LOAD Nov 1 00:46:23.404000 audit[3701]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe501c3370 a2=94 a3=2 items=0 ppid=3511 pid=3701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.404000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:46:23.405000 audit: BPF prog-id=23 op=UNLOAD Nov 1 00:46:23.405000 audit[3701]: AVC avc: denied { bpf } for pid=3701 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.405000 audit[3701]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe501c3240 a2=28 a3=0 items=0 ppid=3511 pid=3701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.405000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:46:23.406000 audit[3701]: AVC avc: denied { bpf } for pid=3701 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.406000 audit[3701]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe501c3270 a2=28 a3=0 items=0 ppid=3511 pid=3701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.406000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:46:23.406000 audit[3701]: AVC avc: denied { bpf } for pid=3701 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.406000 audit[3701]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe501c3180 a2=28 a3=0 items=0 ppid=3511 pid=3701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.406000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:46:23.413000 audit[3701]: AVC avc: denied { bpf } for pid=3701 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.413000 audit[3701]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe501c3290 a2=28 a3=0 items=0 ppid=3511 pid=3701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.413000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:46:23.413000 audit[3701]: AVC avc: denied { bpf } for pid=3701 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.413000 audit[3701]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe501c3270 a2=28 a3=0 items=0 ppid=3511 pid=3701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.413000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:46:23.413000 audit[3701]: AVC avc: denied { bpf } for pid=3701 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.413000 audit[3701]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe501c3260 a2=28 a3=0 items=0 ppid=3511 pid=3701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.413000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:46:23.413000 audit[3701]: AVC avc: denied { bpf } for pid=3701 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.413000 audit[3701]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe501c3290 a2=28 a3=0 items=0 ppid=3511 pid=3701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.413000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:46:23.413000 audit[3701]: AVC avc: denied { bpf } for pid=3701 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.413000 audit[3701]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe501c3270 a2=28 a3=0 items=0 ppid=3511 pid=3701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.413000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:46:23.413000 audit[3701]: AVC avc: denied { bpf } for pid=3701 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.413000 audit[3701]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe501c3290 a2=28 a3=0 items=0 ppid=3511 pid=3701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.413000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:46:23.413000 audit[3701]: AVC avc: denied { bpf } for pid=3701 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.413000 audit[3701]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe501c3260 a2=28 a3=0 items=0 ppid=3511 pid=3701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.413000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:46:23.413000 audit[3701]: AVC avc: denied { bpf } for pid=3701 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.413000 audit[3701]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffe501c32d0 a2=28 a3=0 items=0 ppid=3511 pid=3701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.413000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:46:23.413000 audit[3701]: AVC avc: denied { bpf } for pid=3701 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.413000 audit[3701]: AVC avc: denied { bpf } for pid=3701 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.413000 audit[3701]: AVC avc: denied { perfmon } for pid=3701 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.413000 audit[3701]: AVC avc: denied { perfmon } for pid=3701 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.413000 
audit[3701]: AVC avc: denied { perfmon } for pid=3701 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.413000 audit[3701]: AVC avc: denied { perfmon } for pid=3701 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.413000 audit[3701]: AVC avc: denied { perfmon } for pid=3701 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.413000 audit[3701]: AVC avc: denied { bpf } for pid=3701 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.413000 audit[3701]: AVC avc: denied { bpf } for pid=3701 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.413000 audit: BPF prog-id=24 op=LOAD Nov 1 00:46:23.413000 audit[3701]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe501c3140 a2=94 a3=0 items=0 ppid=3511 pid=3701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.413000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:46:23.422000 audit: BPF prog-id=24 op=UNLOAD Nov 1 00:46:23.422000 audit[3701]: AVC avc: denied { bpf } for pid=3701 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.422000 audit[3701]: SYSCALL 
arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7ffe501c3130 a2=50 a3=2800 items=0 ppid=3511 pid=3701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.422000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:46:23.422000 audit[3701]: AVC avc: denied { bpf } for pid=3701 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.422000 audit[3701]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=0 a1=7ffe501c3130 a2=50 a3=2800 items=0 ppid=3511 pid=3701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.422000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:46:23.422000 audit[3701]: AVC avc: denied { bpf } for pid=3701 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.422000 audit[3701]: AVC avc: denied { bpf } for pid=3701 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.422000 audit[3701]: AVC avc: denied { bpf } for pid=3701 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 
00:46:23.422000 audit[3701]: AVC avc: denied { perfmon } for pid=3701 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.422000 audit[3701]: AVC avc: denied { perfmon } for pid=3701 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.422000 audit[3701]: AVC avc: denied { perfmon } for pid=3701 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.422000 audit[3701]: AVC avc: denied { perfmon } for pid=3701 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.422000 audit[3701]: AVC avc: denied { perfmon } for pid=3701 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.422000 audit[3701]: AVC avc: denied { bpf } for pid=3701 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.422000 audit[3701]: AVC avc: denied { bpf } for pid=3701 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.422000 audit: BPF prog-id=25 op=LOAD Nov 1 00:46:23.422000 audit[3701]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe501c2950 a2=94 a3=2 items=0 ppid=3511 pid=3701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.422000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:46:23.443000 audit: BPF prog-id=25 op=UNLOAD Nov 1 00:46:23.444000 audit[3701]: AVC avc: denied { bpf } for pid=3701 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.444000 audit[3701]: AVC avc: denied { bpf } for pid=3701 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.444000 audit[3701]: AVC avc: denied { bpf } for pid=3701 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.444000 audit[3701]: AVC avc: denied { perfmon } for pid=3701 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.444000 audit[3701]: AVC avc: denied { perfmon } for pid=3701 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.444000 audit[3701]: AVC avc: denied { perfmon } for pid=3701 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.444000 audit[3701]: AVC avc: denied { perfmon } for pid=3701 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.444000 audit[3701]: AVC avc: denied { perfmon } for pid=3701 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 
00:46:23.444000 audit[3701]: AVC avc: denied { bpf } for pid=3701 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.444000 audit[3701]: AVC avc: denied { bpf } for pid=3701 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.444000 audit: BPF prog-id=26 op=LOAD Nov 1 00:46:23.444000 audit[3701]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe501c2a50 a2=94 a3=30 items=0 ppid=3511 pid=3701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.444000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:46:23.451000 audit[3714]: AVC avc: denied { bpf } for pid=3714 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.451000 audit[3714]: AVC avc: denied { bpf } for pid=3714 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.451000 audit[3714]: AVC avc: denied { perfmon } for pid=3714 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.451000 audit[3714]: AVC avc: denied { perfmon } for pid=3714 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.451000 audit[3714]: AVC avc: denied { perfmon } for pid=3714 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.451000 audit[3714]: AVC avc: denied { perfmon } for pid=3714 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.451000 audit[3714]: AVC avc: denied { perfmon } for pid=3714 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.451000 audit[3714]: AVC avc: denied { bpf } for pid=3714 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.451000 audit[3714]: AVC avc: denied { bpf } for pid=3714 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.451000 audit: BPF prog-id=27 op=LOAD Nov 1 00:46:23.451000 audit[3714]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd9a51de40 a2=98 a3=0 items=0 ppid=3511 pid=3714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.451000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:46:23.453000 audit: BPF prog-id=27 op=UNLOAD Nov 1 00:46:23.453000 audit[3714]: AVC avc: denied { bpf } for pid=3714 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.453000 audit[3714]: AVC avc: denied { bpf } for pid=3714 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.453000 audit[3714]: AVC avc: denied { perfmon } for pid=3714 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.453000 audit[3714]: AVC avc: denied { perfmon } for pid=3714 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.453000 audit[3714]: AVC avc: denied { perfmon } for pid=3714 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.453000 audit[3714]: AVC avc: denied { perfmon } for pid=3714 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.453000 audit[3714]: AVC avc: denied { perfmon } for pid=3714 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.453000 audit[3714]: AVC avc: denied { bpf } for pid=3714 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.453000 audit[3714]: AVC avc: denied { bpf } for pid=3714 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.453000 audit: BPF prog-id=28 op=LOAD Nov 1 00:46:23.453000 audit[3714]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd9a51dc30 a2=94 a3=54428f items=0 ppid=3511 pid=3714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.453000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:46:23.453000 audit: BPF prog-id=28 op=UNLOAD Nov 1 00:46:23.453000 audit[3714]: AVC avc: denied { bpf } for pid=3714 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.453000 audit[3714]: AVC avc: denied { bpf } for pid=3714 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.453000 audit[3714]: AVC avc: denied { perfmon } for pid=3714 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.453000 audit[3714]: AVC avc: denied { perfmon } for pid=3714 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.453000 audit[3714]: AVC avc: denied { perfmon } for pid=3714 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.453000 audit[3714]: AVC avc: denied { perfmon } for pid=3714 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.453000 audit[3714]: AVC avc: denied { perfmon } for pid=3714 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.453000 audit[3714]: AVC avc: denied { bpf } for pid=3714 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.453000 audit[3714]: AVC avc: 
denied { bpf } for pid=3714 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.453000 audit: BPF prog-id=29 op=LOAD Nov 1 00:46:23.453000 audit[3714]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd9a51dc60 a2=94 a3=2 items=0 ppid=3511 pid=3714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.453000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:46:23.454000 audit: BPF prog-id=29 op=UNLOAD Nov 1 00:46:23.572586 env[1313]: 2025-11-01 00:46:23.467 [INFO][3686] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="33e3c78eeed54dfa10cf1875bdbb6f84aa8cb8d75ff47a04680d3a14eb4417a8" Nov 1 00:46:23.572586 env[1313]: 2025-11-01 00:46:23.469 [INFO][3686] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="33e3c78eeed54dfa10cf1875bdbb6f84aa8cb8d75ff47a04680d3a14eb4417a8" iface="eth0" netns="/var/run/netns/cni-b31ab6a9-1f17-3028-29dc-14de3861c511" Nov 1 00:46:23.572586 env[1313]: 2025-11-01 00:46:23.469 [INFO][3686] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="33e3c78eeed54dfa10cf1875bdbb6f84aa8cb8d75ff47a04680d3a14eb4417a8" iface="eth0" netns="/var/run/netns/cni-b31ab6a9-1f17-3028-29dc-14de3861c511" Nov 1 00:46:23.572586 env[1313]: 2025-11-01 00:46:23.470 [INFO][3686] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="33e3c78eeed54dfa10cf1875bdbb6f84aa8cb8d75ff47a04680d3a14eb4417a8" iface="eth0" netns="/var/run/netns/cni-b31ab6a9-1f17-3028-29dc-14de3861c511" Nov 1 00:46:23.572586 env[1313]: 2025-11-01 00:46:23.470 [INFO][3686] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="33e3c78eeed54dfa10cf1875bdbb6f84aa8cb8d75ff47a04680d3a14eb4417a8" Nov 1 00:46:23.572586 env[1313]: 2025-11-01 00:46:23.470 [INFO][3686] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="33e3c78eeed54dfa10cf1875bdbb6f84aa8cb8d75ff47a04680d3a14eb4417a8" Nov 1 00:46:23.572586 env[1313]: 2025-11-01 00:46:23.543 [INFO][3716] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="33e3c78eeed54dfa10cf1875bdbb6f84aa8cb8d75ff47a04680d3a14eb4417a8" HandleID="k8s-pod-network.33e3c78eeed54dfa10cf1875bdbb6f84aa8cb8d75ff47a04680d3a14eb4417a8" Workload="localhost-k8s-calico--apiserver--7c858d548c--69qzg-eth0" Nov 1 00:46:23.572586 env[1313]: 2025-11-01 00:46:23.547 [INFO][3716] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:46:23.572586 env[1313]: 2025-11-01 00:46:23.547 [INFO][3716] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:46:23.572586 env[1313]: 2025-11-01 00:46:23.561 [WARNING][3716] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="33e3c78eeed54dfa10cf1875bdbb6f84aa8cb8d75ff47a04680d3a14eb4417a8" HandleID="k8s-pod-network.33e3c78eeed54dfa10cf1875bdbb6f84aa8cb8d75ff47a04680d3a14eb4417a8" Workload="localhost-k8s-calico--apiserver--7c858d548c--69qzg-eth0" Nov 1 00:46:23.572586 env[1313]: 2025-11-01 00:46:23.562 [INFO][3716] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="33e3c78eeed54dfa10cf1875bdbb6f84aa8cb8d75ff47a04680d3a14eb4417a8" HandleID="k8s-pod-network.33e3c78eeed54dfa10cf1875bdbb6f84aa8cb8d75ff47a04680d3a14eb4417a8" Workload="localhost-k8s-calico--apiserver--7c858d548c--69qzg-eth0" Nov 1 00:46:23.572586 env[1313]: 2025-11-01 00:46:23.565 [INFO][3716] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:46:23.572586 env[1313]: 2025-11-01 00:46:23.569 [INFO][3686] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="33e3c78eeed54dfa10cf1875bdbb6f84aa8cb8d75ff47a04680d3a14eb4417a8" Nov 1 00:46:23.575997 systemd[1]: run-netns-cni\x2db31ab6a9\x2d1f17\x2d3028\x2d29dc\x2d14de3861c511.mount: Deactivated successfully. 
Nov 1 00:46:23.578609 env[1313]: time="2025-11-01T00:46:23.576138730Z" level=info msg="TearDown network for sandbox \"33e3c78eeed54dfa10cf1875bdbb6f84aa8cb8d75ff47a04680d3a14eb4417a8\" successfully" Nov 1 00:46:23.578609 env[1313]: time="2025-11-01T00:46:23.576294281Z" level=info msg="StopPodSandbox for \"33e3c78eeed54dfa10cf1875bdbb6f84aa8cb8d75ff47a04680d3a14eb4417a8\" returns successfully" Nov 1 00:46:23.578609 env[1313]: time="2025-11-01T00:46:23.577453077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c858d548c-69qzg,Uid:ffdf82e5-9850-41df-9576-1cf8a00ef8fd,Namespace:calico-apiserver,Attempt:1,}" Nov 1 00:46:23.663000 audit[3714]: AVC avc: denied { bpf } for pid=3714 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.663000 audit[3714]: AVC avc: denied { bpf } for pid=3714 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.663000 audit[3714]: AVC avc: denied { perfmon } for pid=3714 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.663000 audit[3714]: AVC avc: denied { perfmon } for pid=3714 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.663000 audit[3714]: AVC avc: denied { perfmon } for pid=3714 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.663000 audit[3714]: AVC avc: denied { perfmon } for pid=3714 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.663000 audit[3714]: AVC avc: denied { perfmon } for pid=3714 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.663000 audit[3714]: AVC avc: denied { bpf } for pid=3714 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.663000 audit[3714]: AVC avc: denied { bpf } for pid=3714 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.663000 audit: BPF prog-id=30 op=LOAD Nov 1 00:46:23.663000 audit[3714]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd9a51db20 a2=94 a3=1 items=0 ppid=3511 pid=3714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.663000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:46:23.664000 audit: BPF prog-id=30 op=UNLOAD Nov 1 00:46:23.664000 audit[3714]: AVC avc: denied { perfmon } for pid=3714 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.664000 audit[3714]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffd9a51dbf0 a2=50 a3=7ffd9a51dcd0 items=0 ppid=3511 pid=3714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.664000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:46:23.676000 audit[3714]: AVC avc: denied { bpf } for pid=3714 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.676000 audit[3714]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd9a51db30 a2=28 a3=0 items=0 ppid=3511 pid=3714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.676000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:46:23.676000 audit[3714]: AVC avc: denied { bpf } for pid=3714 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.676000 audit[3714]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd9a51db60 a2=28 a3=0 items=0 ppid=3511 pid=3714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.676000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:46:23.676000 audit[3714]: AVC avc: denied { bpf } for pid=3714 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.676000 audit[3714]: SYSCALL arch=c000003e 
syscall=321 success=no exit=-22 a0=12 a1=7ffd9a51da70 a2=28 a3=0 items=0 ppid=3511 pid=3714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.676000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:46:23.676000 audit[3714]: AVC avc: denied { bpf } for pid=3714 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.676000 audit[3714]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd9a51db80 a2=28 a3=0 items=0 ppid=3511 pid=3714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.676000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:46:23.676000 audit[3714]: AVC avc: denied { bpf } for pid=3714 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.676000 audit[3714]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd9a51db60 a2=28 a3=0 items=0 ppid=3511 pid=3714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.676000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:46:23.676000 audit[3714]: AVC avc: denied { bpf } for pid=3714 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.676000 audit[3714]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd9a51db50 a2=28 a3=0 items=0 ppid=3511 pid=3714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.676000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:46:23.676000 audit[3714]: AVC avc: denied { bpf } for pid=3714 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.676000 audit[3714]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd9a51db80 a2=28 a3=0 items=0 ppid=3511 pid=3714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.676000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:46:23.676000 audit[3714]: AVC avc: denied { bpf } for pid=3714 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.676000 audit[3714]: SYSCALL arch=c000003e 
syscall=321 success=no exit=-22 a0=12 a1=7ffd9a51db60 a2=28 a3=0 items=0 ppid=3511 pid=3714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.676000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:46:23.676000 audit[3714]: AVC avc: denied { bpf } for pid=3714 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.676000 audit[3714]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd9a51db80 a2=28 a3=0 items=0 ppid=3511 pid=3714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.676000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:46:23.676000 audit[3714]: AVC avc: denied { bpf } for pid=3714 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.676000 audit[3714]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd9a51db50 a2=28 a3=0 items=0 ppid=3511 pid=3714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.676000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:46:23.676000 audit[3714]: AVC avc: denied { bpf } for pid=3714 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.676000 audit[3714]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd9a51dbc0 a2=28 a3=0 items=0 ppid=3511 pid=3714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.676000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:46:23.677000 audit[3714]: AVC avc: denied { perfmon } for pid=3714 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.677000 audit[3714]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffd9a51d970 a2=50 a3=1 items=0 ppid=3511 pid=3714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.677000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:46:23.677000 audit[3714]: AVC avc: denied { bpf } for pid=3714 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.677000 audit[3714]: AVC avc: denied { bpf } 
for pid=3714 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.677000 audit[3714]: AVC avc: denied { perfmon } for pid=3714 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.677000 audit[3714]: AVC avc: denied { perfmon } for pid=3714 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.677000 audit[3714]: AVC avc: denied { perfmon } for pid=3714 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.677000 audit[3714]: AVC avc: denied { perfmon } for pid=3714 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.677000 audit[3714]: AVC avc: denied { perfmon } for pid=3714 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.677000 audit[3714]: AVC avc: denied { bpf } for pid=3714 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.677000 audit[3714]: AVC avc: denied { bpf } for pid=3714 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.677000 audit: BPF prog-id=31 op=LOAD Nov 1 00:46:23.677000 audit[3714]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd9a51d970 a2=94 a3=5 items=0 ppid=3511 pid=3714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.677000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:46:23.677000 audit: BPF prog-id=31 op=UNLOAD Nov 1 00:46:23.677000 audit[3714]: AVC avc: denied { perfmon } for pid=3714 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.677000 audit[3714]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffd9a51da20 a2=50 a3=1 items=0 ppid=3511 pid=3714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.677000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:46:23.677000 audit[3714]: AVC avc: denied { bpf } for pid=3714 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.677000 audit[3714]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffd9a51db40 a2=4 a3=38 items=0 ppid=3511 pid=3714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.677000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:46:23.677000 audit[3714]: AVC avc: denied { bpf } for pid=3714 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.677000 audit[3714]: AVC avc: denied { bpf } for pid=3714 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.677000 audit[3714]: AVC avc: denied { perfmon } for pid=3714 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.677000 audit[3714]: AVC avc: denied { bpf } for pid=3714 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.677000 audit[3714]: AVC avc: denied { perfmon } for pid=3714 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.677000 audit[3714]: AVC avc: denied { perfmon } for pid=3714 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.677000 audit[3714]: AVC avc: denied { perfmon } for pid=3714 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.677000 audit[3714]: AVC avc: denied { perfmon } for pid=3714 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.677000 audit[3714]: AVC avc: denied { perfmon } for pid=3714 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.677000 audit[3714]: AVC avc: denied { bpf } for pid=3714 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.677000 audit[3714]: AVC avc: denied { confidentiality } for pid=3714 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Nov 1 00:46:23.677000 audit[3714]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffd9a51db90 a2=94 a3=6 items=0 ppid=3511 pid=3714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.677000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:46:23.677000 audit[3714]: AVC avc: denied { bpf } for pid=3714 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.677000 audit[3714]: AVC avc: denied { bpf } for pid=3714 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.677000 audit[3714]: AVC avc: denied { perfmon } for pid=3714 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.677000 audit[3714]: AVC avc: denied { bpf } for pid=3714 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.677000 audit[3714]: AVC avc: denied { perfmon } for pid=3714 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.677000 audit[3714]: AVC avc: denied { 
perfmon } for pid=3714 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.677000 audit[3714]: AVC avc: denied { perfmon } for pid=3714 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.677000 audit[3714]: AVC avc: denied { perfmon } for pid=3714 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.677000 audit[3714]: AVC avc: denied { perfmon } for pid=3714 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.677000 audit[3714]: AVC avc: denied { bpf } for pid=3714 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.677000 audit[3714]: AVC avc: denied { confidentiality } for pid=3714 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Nov 1 00:46:23.677000 audit[3714]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffd9a51d340 a2=94 a3=88 items=0 ppid=3511 pid=3714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.677000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:46:23.677000 audit[3714]: AVC avc: denied { bpf } for pid=3714 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.677000 audit[3714]: AVC avc: denied { bpf } for pid=3714 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.677000 audit[3714]: AVC avc: denied { perfmon } for pid=3714 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.677000 audit[3714]: AVC avc: denied { bpf } for pid=3714 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.677000 audit[3714]: AVC avc: denied { perfmon } for pid=3714 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.677000 audit[3714]: AVC avc: denied { perfmon } for pid=3714 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.677000 audit[3714]: AVC avc: denied { perfmon } for pid=3714 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.677000 audit[3714]: AVC avc: denied { perfmon } for pid=3714 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.677000 audit[3714]: AVC avc: denied { perfmon } for pid=3714 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.677000 audit[3714]: AVC avc: denied { bpf } for pid=3714 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 
00:46:23.677000 audit[3714]: AVC avc: denied { confidentiality } for pid=3714 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Nov 1 00:46:23.677000 audit[3714]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffd9a51d340 a2=94 a3=88 items=0 ppid=3511 pid=3714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.677000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:46:23.678000 audit[3714]: AVC avc: denied { bpf } for pid=3714 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.678000 audit[3714]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffd9a51ed70 a2=10 a3=208 items=0 ppid=3511 pid=3714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.678000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:46:23.678000 audit[3714]: AVC avc: denied { bpf } for pid=3714 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.678000 audit[3714]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffd9a51ec10 a2=10 a3=3 items=0 ppid=3511 pid=3714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.678000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:46:23.678000 audit[3714]: AVC avc: denied { bpf } for pid=3714 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.678000 audit[3714]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffd9a51ebb0 a2=10 a3=3 items=0 ppid=3511 pid=3714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.678000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:46:23.678000 audit[3714]: AVC avc: denied { bpf } for pid=3714 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:46:23.678000 audit[3714]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffd9a51ebb0 a2=10 a3=7 items=0 ppid=3511 pid=3714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.678000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:46:23.690000 audit: BPF prog-id=26 op=UNLOAD Nov 1 00:46:23.760990 systemd-networkd[1077]: cali8d1afc48e0d: 
Link UP Nov 1 00:46:23.823508 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali8d1afc48e0d: link becomes ready Nov 1 00:46:23.833896 systemd-networkd[1077]: cali8d1afc48e0d: Gained carrier Nov 1 00:46:23.884325 kubelet[2110]: E1101 00:46:23.883807 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:46:23.924362 env[1313]: 2025-11-01 00:46:23.561 [INFO][3717] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--547545f98f--bqwf6-eth0 whisker-547545f98f- calico-system 60293e01-1e82-445c-9d51-cf8544191dce 923 0 2025-11-01 00:46:22 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:547545f98f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-547545f98f-bqwf6 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali8d1afc48e0d [] [] }} ContainerID="9582d00d2037a7ab9d33a09d055c8097f86ef6e651ad6fcc10fa71819f110fef" Namespace="calico-system" Pod="whisker-547545f98f-bqwf6" WorkloadEndpoint="localhost-k8s-whisker--547545f98f--bqwf6-" Nov 1 00:46:23.924362 env[1313]: 2025-11-01 00:46:23.562 [INFO][3717] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9582d00d2037a7ab9d33a09d055c8097f86ef6e651ad6fcc10fa71819f110fef" Namespace="calico-system" Pod="whisker-547545f98f-bqwf6" WorkloadEndpoint="localhost-k8s-whisker--547545f98f--bqwf6-eth0" Nov 1 00:46:23.924362 env[1313]: 2025-11-01 00:46:23.622 [INFO][3736] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9582d00d2037a7ab9d33a09d055c8097f86ef6e651ad6fcc10fa71819f110fef" HandleID="k8s-pod-network.9582d00d2037a7ab9d33a09d055c8097f86ef6e651ad6fcc10fa71819f110fef" Workload="localhost-k8s-whisker--547545f98f--bqwf6-eth0" Nov 1 
00:46:23.924362 env[1313]: 2025-11-01 00:46:23.622 [INFO][3736] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9582d00d2037a7ab9d33a09d055c8097f86ef6e651ad6fcc10fa71819f110fef" HandleID="k8s-pod-network.9582d00d2037a7ab9d33a09d055c8097f86ef6e651ad6fcc10fa71819f110fef" Workload="localhost-k8s-whisker--547545f98f--bqwf6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a43a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-547545f98f-bqwf6", "timestamp":"2025-11-01 00:46:23.622575555 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:46:23.924362 env[1313]: 2025-11-01 00:46:23.623 [INFO][3736] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:46:23.924362 env[1313]: 2025-11-01 00:46:23.623 [INFO][3736] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:46:23.924362 env[1313]: 2025-11-01 00:46:23.623 [INFO][3736] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 00:46:23.924362 env[1313]: 2025-11-01 00:46:23.649 [INFO][3736] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9582d00d2037a7ab9d33a09d055c8097f86ef6e651ad6fcc10fa71819f110fef" host="localhost" Nov 1 00:46:23.924362 env[1313]: 2025-11-01 00:46:23.676 [INFO][3736] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 00:46:23.924362 env[1313]: 2025-11-01 00:46:23.691 [INFO][3736] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 00:46:23.924362 env[1313]: 2025-11-01 00:46:23.697 [INFO][3736] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 00:46:23.924362 env[1313]: 2025-11-01 00:46:23.701 [INFO][3736] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 00:46:23.924362 env[1313]: 2025-11-01 00:46:23.701 [INFO][3736] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9582d00d2037a7ab9d33a09d055c8097f86ef6e651ad6fcc10fa71819f110fef" host="localhost" Nov 1 00:46:23.924362 env[1313]: 2025-11-01 00:46:23.704 [INFO][3736] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9582d00d2037a7ab9d33a09d055c8097f86ef6e651ad6fcc10fa71819f110fef Nov 1 00:46:23.924362 env[1313]: 2025-11-01 00:46:23.724 [INFO][3736] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9582d00d2037a7ab9d33a09d055c8097f86ef6e651ad6fcc10fa71819f110fef" host="localhost" Nov 1 00:46:23.924362 env[1313]: 2025-11-01 00:46:23.746 [INFO][3736] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.9582d00d2037a7ab9d33a09d055c8097f86ef6e651ad6fcc10fa71819f110fef" host="localhost" Nov 1 00:46:23.924362 
env[1313]: 2025-11-01 00:46:23.746 [INFO][3736] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.9582d00d2037a7ab9d33a09d055c8097f86ef6e651ad6fcc10fa71819f110fef" host="localhost" Nov 1 00:46:23.924362 env[1313]: 2025-11-01 00:46:23.746 [INFO][3736] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:46:23.924362 env[1313]: 2025-11-01 00:46:23.746 [INFO][3736] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="9582d00d2037a7ab9d33a09d055c8097f86ef6e651ad6fcc10fa71819f110fef" HandleID="k8s-pod-network.9582d00d2037a7ab9d33a09d055c8097f86ef6e651ad6fcc10fa71819f110fef" Workload="localhost-k8s-whisker--547545f98f--bqwf6-eth0" Nov 1 00:46:23.925448 env[1313]: 2025-11-01 00:46:23.751 [INFO][3717] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9582d00d2037a7ab9d33a09d055c8097f86ef6e651ad6fcc10fa71819f110fef" Namespace="calico-system" Pod="whisker-547545f98f-bqwf6" WorkloadEndpoint="localhost-k8s-whisker--547545f98f--bqwf6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--547545f98f--bqwf6-eth0", GenerateName:"whisker-547545f98f-", Namespace:"calico-system", SelfLink:"", UID:"60293e01-1e82-445c-9d51-cf8544191dce", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 46, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"547545f98f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", 
Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-547545f98f-bqwf6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8d1afc48e0d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:46:23.925448 env[1313]: 2025-11-01 00:46:23.755 [INFO][3717] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="9582d00d2037a7ab9d33a09d055c8097f86ef6e651ad6fcc10fa71819f110fef" Namespace="calico-system" Pod="whisker-547545f98f-bqwf6" WorkloadEndpoint="localhost-k8s-whisker--547545f98f--bqwf6-eth0" Nov 1 00:46:23.925448 env[1313]: 2025-11-01 00:46:23.756 [INFO][3717] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8d1afc48e0d ContainerID="9582d00d2037a7ab9d33a09d055c8097f86ef6e651ad6fcc10fa71819f110fef" Namespace="calico-system" Pod="whisker-547545f98f-bqwf6" WorkloadEndpoint="localhost-k8s-whisker--547545f98f--bqwf6-eth0" Nov 1 00:46:23.925448 env[1313]: 2025-11-01 00:46:23.846 [INFO][3717] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9582d00d2037a7ab9d33a09d055c8097f86ef6e651ad6fcc10fa71819f110fef" Namespace="calico-system" Pod="whisker-547545f98f-bqwf6" WorkloadEndpoint="localhost-k8s-whisker--547545f98f--bqwf6-eth0" Nov 1 00:46:23.925448 env[1313]: 2025-11-01 00:46:23.854 [INFO][3717] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9582d00d2037a7ab9d33a09d055c8097f86ef6e651ad6fcc10fa71819f110fef" Namespace="calico-system" Pod="whisker-547545f98f-bqwf6" WorkloadEndpoint="localhost-k8s-whisker--547545f98f--bqwf6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--547545f98f--bqwf6-eth0", GenerateName:"whisker-547545f98f-", Namespace:"calico-system", SelfLink:"", UID:"60293e01-1e82-445c-9d51-cf8544191dce", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 46, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"547545f98f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9582d00d2037a7ab9d33a09d055c8097f86ef6e651ad6fcc10fa71819f110fef", Pod:"whisker-547545f98f-bqwf6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8d1afc48e0d", MAC:"42:44:2e:39:49:84", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:46:23.925448 env[1313]: 2025-11-01 00:46:23.916 [INFO][3717] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9582d00d2037a7ab9d33a09d055c8097f86ef6e651ad6fcc10fa71819f110fef" Namespace="calico-system" Pod="whisker-547545f98f-bqwf6" WorkloadEndpoint="localhost-k8s-whisker--547545f98f--bqwf6-eth0" Nov 1 00:46:23.938000 audit[3805]: NETFILTER_CFG table=mangle:109 family=2 entries=16 op=nft_register_chain pid=3805 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:46:23.938000 audit[3805]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffd3a829450 a2=0 a3=7ffd3a82943c 
items=0 ppid=3511 pid=3805 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.938000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:46:23.961000 audit[3803]: NETFILTER_CFG table=nat:110 family=2 entries=15 op=nft_register_chain pid=3803 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:46:23.961000 audit[3803]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffd3586d4c0 a2=0 a3=7ffd3586d4ac items=0 ppid=3511 pid=3803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.961000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:46:23.979730 env[1313]: time="2025-11-01T00:46:23.979627755Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:46:23.980000 env[1313]: time="2025-11-01T00:46:23.979971881Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:46:23.980128 env[1313]: time="2025-11-01T00:46:23.980101424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:46:23.980464 env[1313]: time="2025-11-01T00:46:23.980421696Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9582d00d2037a7ab9d33a09d055c8097f86ef6e651ad6fcc10fa71819f110fef pid=3826 runtime=io.containerd.runc.v2 Nov 1 00:46:23.984000 audit[3804]: NETFILTER_CFG table=raw:111 family=2 entries=21 op=nft_register_chain pid=3804 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:46:23.984000 audit[3804]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffeaa7a4f30 a2=0 a3=7ffeaa7a4f1c items=0 ppid=3511 pid=3804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.984000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:46:23.998000 audit[3814]: NETFILTER_CFG table=filter:112 family=2 entries=39 op=nft_register_chain pid=3814 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:46:23.998000 audit[3814]: SYSCALL arch=c000003e syscall=46 success=yes exit=18968 a0=3 a1=7ffee3432ad0 a2=0 a3=7ffee3432abc items=0 ppid=3511 pid=3814 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:23.998000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:46:24.016765 systemd-networkd[1077]: calib4b22c13b23: Link UP Nov 1 00:46:24.020379 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calib4b22c13b23: link becomes ready Nov 1 00:46:24.020591 
systemd-networkd[1077]: calib4b22c13b23: Gained carrier Nov 1 00:46:24.034864 systemd-resolved[1223]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:46:24.073623 env[1313]: 2025-11-01 00:46:23.695 [INFO][3743] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7c858d548c--69qzg-eth0 calico-apiserver-7c858d548c- calico-apiserver ffdf82e5-9850-41df-9576-1cf8a00ef8fd 929 0 2025-11-01 00:45:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7c858d548c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7c858d548c-69qzg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib4b22c13b23 [] [] }} ContainerID="a2014592d5374ffef25b33c137d33f188fd6dcbf0fa5615243d541115ca9614f" Namespace="calico-apiserver" Pod="calico-apiserver-7c858d548c-69qzg" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c858d548c--69qzg-" Nov 1 00:46:24.073623 env[1313]: 2025-11-01 00:46:23.696 [INFO][3743] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a2014592d5374ffef25b33c137d33f188fd6dcbf0fa5615243d541115ca9614f" Namespace="calico-apiserver" Pod="calico-apiserver-7c858d548c-69qzg" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c858d548c--69qzg-eth0" Nov 1 00:46:24.073623 env[1313]: 2025-11-01 00:46:23.872 [INFO][3765] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a2014592d5374ffef25b33c137d33f188fd6dcbf0fa5615243d541115ca9614f" HandleID="k8s-pod-network.a2014592d5374ffef25b33c137d33f188fd6dcbf0fa5615243d541115ca9614f" Workload="localhost-k8s-calico--apiserver--7c858d548c--69qzg-eth0" Nov 1 00:46:24.073623 env[1313]: 2025-11-01 00:46:23.872 [INFO][3765] 
ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a2014592d5374ffef25b33c137d33f188fd6dcbf0fa5615243d541115ca9614f" HandleID="k8s-pod-network.a2014592d5374ffef25b33c137d33f188fd6dcbf0fa5615243d541115ca9614f" Workload="localhost-k8s-calico--apiserver--7c858d548c--69qzg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f620), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7c858d548c-69qzg", "timestamp":"2025-11-01 00:46:23.87254822 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:46:24.073623 env[1313]: 2025-11-01 00:46:23.872 [INFO][3765] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:46:24.073623 env[1313]: 2025-11-01 00:46:23.872 [INFO][3765] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:46:24.073623 env[1313]: 2025-11-01 00:46:23.872 [INFO][3765] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 00:46:24.073623 env[1313]: 2025-11-01 00:46:23.902 [INFO][3765] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a2014592d5374ffef25b33c137d33f188fd6dcbf0fa5615243d541115ca9614f" host="localhost" Nov 1 00:46:24.073623 env[1313]: 2025-11-01 00:46:23.930 [INFO][3765] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 00:46:24.073623 env[1313]: 2025-11-01 00:46:23.963 [INFO][3765] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 00:46:24.073623 env[1313]: 2025-11-01 00:46:23.968 [INFO][3765] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 00:46:24.073623 env[1313]: 2025-11-01 00:46:23.977 [INFO][3765] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 00:46:24.073623 env[1313]: 2025-11-01 00:46:23.977 [INFO][3765] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a2014592d5374ffef25b33c137d33f188fd6dcbf0fa5615243d541115ca9614f" host="localhost" Nov 1 00:46:24.073623 env[1313]: 2025-11-01 00:46:23.981 [INFO][3765] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a2014592d5374ffef25b33c137d33f188fd6dcbf0fa5615243d541115ca9614f Nov 1 00:46:24.073623 env[1313]: 2025-11-01 00:46:23.995 [INFO][3765] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a2014592d5374ffef25b33c137d33f188fd6dcbf0fa5615243d541115ca9614f" host="localhost" Nov 1 00:46:24.073623 env[1313]: 2025-11-01 00:46:24.010 [INFO][3765] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.a2014592d5374ffef25b33c137d33f188fd6dcbf0fa5615243d541115ca9614f" host="localhost" Nov 1 00:46:24.073623 
env[1313]: 2025-11-01 00:46:24.010 [INFO][3765] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.a2014592d5374ffef25b33c137d33f188fd6dcbf0fa5615243d541115ca9614f" host="localhost" Nov 1 00:46:24.073623 env[1313]: 2025-11-01 00:46:24.011 [INFO][3765] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:46:24.073623 env[1313]: 2025-11-01 00:46:24.011 [INFO][3765] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="a2014592d5374ffef25b33c137d33f188fd6dcbf0fa5615243d541115ca9614f" HandleID="k8s-pod-network.a2014592d5374ffef25b33c137d33f188fd6dcbf0fa5615243d541115ca9614f" Workload="localhost-k8s-calico--apiserver--7c858d548c--69qzg-eth0" Nov 1 00:46:24.076051 env[1313]: 2025-11-01 00:46:24.015 [INFO][3743] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a2014592d5374ffef25b33c137d33f188fd6dcbf0fa5615243d541115ca9614f" Namespace="calico-apiserver" Pod="calico-apiserver-7c858d548c-69qzg" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c858d548c--69qzg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7c858d548c--69qzg-eth0", GenerateName:"calico-apiserver-7c858d548c-", Namespace:"calico-apiserver", SelfLink:"", UID:"ffdf82e5-9850-41df-9576-1cf8a00ef8fd", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 45, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c858d548c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7c858d548c-69qzg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib4b22c13b23", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:46:24.076051 env[1313]: 2025-11-01 00:46:24.015 [INFO][3743] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="a2014592d5374ffef25b33c137d33f188fd6dcbf0fa5615243d541115ca9614f" Namespace="calico-apiserver" Pod="calico-apiserver-7c858d548c-69qzg" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c858d548c--69qzg-eth0" Nov 1 00:46:24.076051 env[1313]: 2025-11-01 00:46:24.015 [INFO][3743] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib4b22c13b23 ContainerID="a2014592d5374ffef25b33c137d33f188fd6dcbf0fa5615243d541115ca9614f" Namespace="calico-apiserver" Pod="calico-apiserver-7c858d548c-69qzg" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c858d548c--69qzg-eth0" Nov 1 00:46:24.076051 env[1313]: 2025-11-01 00:46:24.016 [INFO][3743] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a2014592d5374ffef25b33c137d33f188fd6dcbf0fa5615243d541115ca9614f" Namespace="calico-apiserver" Pod="calico-apiserver-7c858d548c-69qzg" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c858d548c--69qzg-eth0" Nov 1 00:46:24.076051 env[1313]: 2025-11-01 00:46:24.017 [INFO][3743] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a2014592d5374ffef25b33c137d33f188fd6dcbf0fa5615243d541115ca9614f" Namespace="calico-apiserver" Pod="calico-apiserver-7c858d548c-69qzg" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--7c858d548c--69qzg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7c858d548c--69qzg-eth0", GenerateName:"calico-apiserver-7c858d548c-", Namespace:"calico-apiserver", SelfLink:"", UID:"ffdf82e5-9850-41df-9576-1cf8a00ef8fd", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 45, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c858d548c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a2014592d5374ffef25b33c137d33f188fd6dcbf0fa5615243d541115ca9614f", Pod:"calico-apiserver-7c858d548c-69qzg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib4b22c13b23", MAC:"06:72:33:6c:d9:43", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:46:24.076051 env[1313]: 2025-11-01 00:46:24.052 [INFO][3743] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a2014592d5374ffef25b33c137d33f188fd6dcbf0fa5615243d541115ca9614f" Namespace="calico-apiserver" Pod="calico-apiserver-7c858d548c-69qzg" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--7c858d548c--69qzg-eth0" Nov 1 00:46:24.103862 env[1313]: time="2025-11-01T00:46:24.103181397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-547545f98f-bqwf6,Uid:60293e01-1e82-445c-9d51-cf8544191dce,Namespace:calico-system,Attempt:0,} returns sandbox id \"9582d00d2037a7ab9d33a09d055c8097f86ef6e651ad6fcc10fa71819f110fef\"" Nov 1 00:46:24.105812 env[1313]: time="2025-11-01T00:46:24.105156306Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:46:24.138613 env[1313]: time="2025-11-01T00:46:24.136232463Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:46:24.138613 env[1313]: time="2025-11-01T00:46:24.136355985Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:46:24.138613 env[1313]: time="2025-11-01T00:46:24.136389388Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:46:24.138613 env[1313]: time="2025-11-01T00:46:24.136613849Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a2014592d5374ffef25b33c137d33f188fd6dcbf0fa5615243d541115ca9614f pid=3883 runtime=io.containerd.runc.v2 Nov 1 00:46:24.084000 audit[3860]: NETFILTER_CFG table=filter:113 family=2 entries=59 op=nft_register_chain pid=3860 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:46:24.084000 audit[3860]: SYSCALL arch=c000003e syscall=46 success=yes exit=35860 a0=3 a1=7fffb1419480 a2=0 a3=7fffb141946c items=0 ppid=3511 pid=3860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:24.084000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:46:24.179000 audit[3906]: NETFILTER_CFG table=filter:114 family=2 entries=50 op=nft_register_chain pid=3906 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:46:24.179000 audit[3906]: SYSCALL arch=c000003e syscall=46 success=yes exit=28208 a0=3 a1=7ffed6cfe5a0 a2=0 a3=7ffed6cfe58c items=0 ppid=3511 pid=3906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:24.179000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:46:24.194286 systemd-resolved[1223]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:46:24.274883 env[1313]: 
time="2025-11-01T00:46:24.270778332Z" level=info msg="StopPodSandbox for \"78af24ccbafbc3fe2805e32086a970ba748d84ceba4d9fcf426d832b1906a317\"" Nov 1 00:46:24.275416 env[1313]: time="2025-11-01T00:46:24.275379360Z" level=info msg="StopPodSandbox for \"f06b1e1136a71a703e314baa39dd2dcabbfca8b11f5516d845f11ecceba01c4d\"" Nov 1 00:46:24.312704 env[1313]: time="2025-11-01T00:46:24.312652212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c858d548c-69qzg,Uid:ffdf82e5-9850-41df-9576-1cf8a00ef8fd,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"a2014592d5374ffef25b33c137d33f188fd6dcbf0fa5615243d541115ca9614f\"" Nov 1 00:46:24.445008 env[1313]: 2025-11-01 00:46:24.381 [INFO][3937] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="78af24ccbafbc3fe2805e32086a970ba748d84ceba4d9fcf426d832b1906a317" Nov 1 00:46:24.445008 env[1313]: 2025-11-01 00:46:24.382 [INFO][3937] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="78af24ccbafbc3fe2805e32086a970ba748d84ceba4d9fcf426d832b1906a317" iface="eth0" netns="/var/run/netns/cni-71800240-8033-ca4d-2f31-c5ffa4fb4482" Nov 1 00:46:24.445008 env[1313]: 2025-11-01 00:46:24.382 [INFO][3937] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="78af24ccbafbc3fe2805e32086a970ba748d84ceba4d9fcf426d832b1906a317" iface="eth0" netns="/var/run/netns/cni-71800240-8033-ca4d-2f31-c5ffa4fb4482" Nov 1 00:46:24.445008 env[1313]: 2025-11-01 00:46:24.383 [INFO][3937] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="78af24ccbafbc3fe2805e32086a970ba748d84ceba4d9fcf426d832b1906a317" iface="eth0" netns="/var/run/netns/cni-71800240-8033-ca4d-2f31-c5ffa4fb4482" Nov 1 00:46:24.445008 env[1313]: 2025-11-01 00:46:24.383 [INFO][3937] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="78af24ccbafbc3fe2805e32086a970ba748d84ceba4d9fcf426d832b1906a317" Nov 1 00:46:24.445008 env[1313]: 2025-11-01 00:46:24.383 [INFO][3937] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="78af24ccbafbc3fe2805e32086a970ba748d84ceba4d9fcf426d832b1906a317" Nov 1 00:46:24.445008 env[1313]: 2025-11-01 00:46:24.422 [INFO][3957] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="78af24ccbafbc3fe2805e32086a970ba748d84ceba4d9fcf426d832b1906a317" HandleID="k8s-pod-network.78af24ccbafbc3fe2805e32086a970ba748d84ceba4d9fcf426d832b1906a317" Workload="localhost-k8s-coredns--668d6bf9bc--s979b-eth0" Nov 1 00:46:24.445008 env[1313]: 2025-11-01 00:46:24.422 [INFO][3957] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:46:24.445008 env[1313]: 2025-11-01 00:46:24.423 [INFO][3957] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:46:24.445008 env[1313]: 2025-11-01 00:46:24.431 [WARNING][3957] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="78af24ccbafbc3fe2805e32086a970ba748d84ceba4d9fcf426d832b1906a317" HandleID="k8s-pod-network.78af24ccbafbc3fe2805e32086a970ba748d84ceba4d9fcf426d832b1906a317" Workload="localhost-k8s-coredns--668d6bf9bc--s979b-eth0" Nov 1 00:46:24.445008 env[1313]: 2025-11-01 00:46:24.432 [INFO][3957] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="78af24ccbafbc3fe2805e32086a970ba748d84ceba4d9fcf426d832b1906a317" HandleID="k8s-pod-network.78af24ccbafbc3fe2805e32086a970ba748d84ceba4d9fcf426d832b1906a317" Workload="localhost-k8s-coredns--668d6bf9bc--s979b-eth0" Nov 1 00:46:24.445008 env[1313]: 2025-11-01 00:46:24.437 [INFO][3957] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:46:24.445008 env[1313]: 2025-11-01 00:46:24.439 [INFO][3937] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="78af24ccbafbc3fe2805e32086a970ba748d84ceba4d9fcf426d832b1906a317" Nov 1 00:46:24.449095 env[1313]: time="2025-11-01T00:46:24.449055278Z" level=info msg="TearDown network for sandbox \"78af24ccbafbc3fe2805e32086a970ba748d84ceba4d9fcf426d832b1906a317\" successfully" Nov 1 00:46:24.449223 env[1313]: time="2025-11-01T00:46:24.449200310Z" level=info msg="StopPodSandbox for \"78af24ccbafbc3fe2805e32086a970ba748d84ceba4d9fcf426d832b1906a317\" returns successfully" Nov 1 00:46:24.449875 kubelet[2110]: E1101 00:46:24.449689 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:46:24.450995 env[1313]: time="2025-11-01T00:46:24.450972598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s979b,Uid:6dbb4ea3-15f9-43b9-bbdc-e92a3f607f2c,Namespace:kube-system,Attempt:1,}" Nov 1 00:46:24.478226 env[1313]: time="2025-11-01T00:46:24.477663341Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:46:24.487869 systemd[1]: 
run-netns-cni\x2d71800240\x2d8033\x2dca4d\x2d2f31\x2dc5ffa4fb4482.mount: Deactivated successfully. Nov 1 00:46:24.493128 env[1313]: time="2025-11-01T00:46:24.492579838Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:46:24.497788 kubelet[2110]: E1101 00:46:24.497586 2110 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:46:24.498637 kubelet[2110]: E1101 00:46:24.497986 2110 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:46:24.498736 kubelet[2110]: E1101 00:46:24.498321 2110 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:fa950cf830a346e59fecb654697ba8aa,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g4bvg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-547545f98f-bqwf6_calico-system(60293e01-1e82-445c-9d51-cf8544191dce): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:46:24.501766 env[1313]: time="2025-11-01T00:46:24.501676527Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:46:24.514611 env[1313]: 
2025-11-01 00:46:24.407 [INFO][3947] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f06b1e1136a71a703e314baa39dd2dcabbfca8b11f5516d845f11ecceba01c4d" Nov 1 00:46:24.514611 env[1313]: 2025-11-01 00:46:24.407 [INFO][3947] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f06b1e1136a71a703e314baa39dd2dcabbfca8b11f5516d845f11ecceba01c4d" iface="eth0" netns="/var/run/netns/cni-2839aaf4-7d8b-a9e8-3f35-3167e13e0d80" Nov 1 00:46:24.514611 env[1313]: 2025-11-01 00:46:24.407 [INFO][3947] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f06b1e1136a71a703e314baa39dd2dcabbfca8b11f5516d845f11ecceba01c4d" iface="eth0" netns="/var/run/netns/cni-2839aaf4-7d8b-a9e8-3f35-3167e13e0d80" Nov 1 00:46:24.514611 env[1313]: 2025-11-01 00:46:24.408 [INFO][3947] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f06b1e1136a71a703e314baa39dd2dcabbfca8b11f5516d845f11ecceba01c4d" iface="eth0" netns="/var/run/netns/cni-2839aaf4-7d8b-a9e8-3f35-3167e13e0d80" Nov 1 00:46:24.514611 env[1313]: 2025-11-01 00:46:24.408 [INFO][3947] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f06b1e1136a71a703e314baa39dd2dcabbfca8b11f5516d845f11ecceba01c4d" Nov 1 00:46:24.514611 env[1313]: 2025-11-01 00:46:24.408 [INFO][3947] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f06b1e1136a71a703e314baa39dd2dcabbfca8b11f5516d845f11ecceba01c4d" Nov 1 00:46:24.514611 env[1313]: 2025-11-01 00:46:24.445 [INFO][3965] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f06b1e1136a71a703e314baa39dd2dcabbfca8b11f5516d845f11ecceba01c4d" HandleID="k8s-pod-network.f06b1e1136a71a703e314baa39dd2dcabbfca8b11f5516d845f11ecceba01c4d" Workload="localhost-k8s-calico--apiserver--7c858d548c--cmrkr-eth0" Nov 1 00:46:24.514611 env[1313]: 2025-11-01 00:46:24.445 [INFO][3965] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 1 00:46:24.514611 env[1313]: 2025-11-01 00:46:24.445 [INFO][3965] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:46:24.514611 env[1313]: 2025-11-01 00:46:24.474 [WARNING][3965] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f06b1e1136a71a703e314baa39dd2dcabbfca8b11f5516d845f11ecceba01c4d" HandleID="k8s-pod-network.f06b1e1136a71a703e314baa39dd2dcabbfca8b11f5516d845f11ecceba01c4d" Workload="localhost-k8s-calico--apiserver--7c858d548c--cmrkr-eth0" Nov 1 00:46:24.514611 env[1313]: 2025-11-01 00:46:24.474 [INFO][3965] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f06b1e1136a71a703e314baa39dd2dcabbfca8b11f5516d845f11ecceba01c4d" HandleID="k8s-pod-network.f06b1e1136a71a703e314baa39dd2dcabbfca8b11f5516d845f11ecceba01c4d" Workload="localhost-k8s-calico--apiserver--7c858d548c--cmrkr-eth0" Nov 1 00:46:24.514611 env[1313]: 2025-11-01 00:46:24.501 [INFO][3965] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:46:24.514611 env[1313]: 2025-11-01 00:46:24.512 [INFO][3947] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f06b1e1136a71a703e314baa39dd2dcabbfca8b11f5516d845f11ecceba01c4d" Nov 1 00:46:24.518485 systemd[1]: run-netns-cni\x2d2839aaf4\x2d7d8b\x2da9e8\x2d3f35\x2d3167e13e0d80.mount: Deactivated successfully. 
Nov 1 00:46:24.519438 env[1313]: time="2025-11-01T00:46:24.519308843Z" level=info msg="TearDown network for sandbox \"f06b1e1136a71a703e314baa39dd2dcabbfca8b11f5516d845f11ecceba01c4d\" successfully" Nov 1 00:46:24.519559 env[1313]: time="2025-11-01T00:46:24.519534416Z" level=info msg="StopPodSandbox for \"f06b1e1136a71a703e314baa39dd2dcabbfca8b11f5516d845f11ecceba01c4d\" returns successfully" Nov 1 00:46:24.520389 env[1313]: time="2025-11-01T00:46:24.520362852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c858d548c-cmrkr,Uid:704976ec-fdca-4788-bd96-1a541f0cf01c,Namespace:calico-apiserver,Attempt:1,}" Nov 1 00:46:24.721463 systemd-networkd[1077]: cali84a508d0077: Link UP Nov 1 00:46:24.728619 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Nov 1 00:46:24.728746 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali84a508d0077: link becomes ready Nov 1 00:46:24.738245 systemd-networkd[1077]: cali84a508d0077: Gained carrier Nov 1 00:46:24.738459 systemd-networkd[1077]: vxlan.calico: Gained IPv6LL Nov 1 00:46:24.766620 env[1313]: 2025-11-01 00:46:24.565 [INFO][3976] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--s979b-eth0 coredns-668d6bf9bc- kube-system 6dbb4ea3-15f9-43b9-bbdc-e92a3f607f2c 943 0 2025-11-01 00:45:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-s979b eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali84a508d0077 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="2744b8c6792f36a11e1d45331f56c4da21b4ae36abe1c4fcf9d5d2d34ff44e62" Namespace="kube-system" Pod="coredns-668d6bf9bc-s979b" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--s979b-" Nov 1 00:46:24.766620 env[1313]: 2025-11-01 00:46:24.566 
[INFO][3976] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2744b8c6792f36a11e1d45331f56c4da21b4ae36abe1c4fcf9d5d2d34ff44e62" Namespace="kube-system" Pod="coredns-668d6bf9bc-s979b" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--s979b-eth0" Nov 1 00:46:24.766620 env[1313]: 2025-11-01 00:46:24.621 [INFO][4003] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2744b8c6792f36a11e1d45331f56c4da21b4ae36abe1c4fcf9d5d2d34ff44e62" HandleID="k8s-pod-network.2744b8c6792f36a11e1d45331f56c4da21b4ae36abe1c4fcf9d5d2d34ff44e62" Workload="localhost-k8s-coredns--668d6bf9bc--s979b-eth0" Nov 1 00:46:24.766620 env[1313]: 2025-11-01 00:46:24.621 [INFO][4003] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2744b8c6792f36a11e1d45331f56c4da21b4ae36abe1c4fcf9d5d2d34ff44e62" HandleID="k8s-pod-network.2744b8c6792f36a11e1d45331f56c4da21b4ae36abe1c4fcf9d5d2d34ff44e62" Workload="localhost-k8s-coredns--668d6bf9bc--s979b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000130da0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-s979b", "timestamp":"2025-11-01 00:46:24.621486517 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:46:24.766620 env[1313]: 2025-11-01 00:46:24.621 [INFO][4003] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:46:24.766620 env[1313]: 2025-11-01 00:46:24.622 [INFO][4003] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:46:24.766620 env[1313]: 2025-11-01 00:46:24.622 [INFO][4003] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 00:46:24.766620 env[1313]: 2025-11-01 00:46:24.635 [INFO][4003] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2744b8c6792f36a11e1d45331f56c4da21b4ae36abe1c4fcf9d5d2d34ff44e62" host="localhost" Nov 1 00:46:24.766620 env[1313]: 2025-11-01 00:46:24.655 [INFO][4003] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 00:46:24.766620 env[1313]: 2025-11-01 00:46:24.671 [INFO][4003] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 00:46:24.766620 env[1313]: 2025-11-01 00:46:24.677 [INFO][4003] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 00:46:24.766620 env[1313]: 2025-11-01 00:46:24.683 [INFO][4003] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 00:46:24.766620 env[1313]: 2025-11-01 00:46:24.683 [INFO][4003] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2744b8c6792f36a11e1d45331f56c4da21b4ae36abe1c4fcf9d5d2d34ff44e62" host="localhost" Nov 1 00:46:24.766620 env[1313]: 2025-11-01 00:46:24.688 [INFO][4003] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2744b8c6792f36a11e1d45331f56c4da21b4ae36abe1c4fcf9d5d2d34ff44e62 Nov 1 00:46:24.766620 env[1313]: 2025-11-01 00:46:24.695 [INFO][4003] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2744b8c6792f36a11e1d45331f56c4da21b4ae36abe1c4fcf9d5d2d34ff44e62" host="localhost" Nov 1 00:46:24.766620 env[1313]: 2025-11-01 00:46:24.709 [INFO][4003] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.2744b8c6792f36a11e1d45331f56c4da21b4ae36abe1c4fcf9d5d2d34ff44e62" host="localhost" Nov 1 00:46:24.766620 
env[1313]: 2025-11-01 00:46:24.709 [INFO][4003] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.2744b8c6792f36a11e1d45331f56c4da21b4ae36abe1c4fcf9d5d2d34ff44e62" host="localhost" Nov 1 00:46:24.766620 env[1313]: 2025-11-01 00:46:24.709 [INFO][4003] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:46:24.766620 env[1313]: 2025-11-01 00:46:24.709 [INFO][4003] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="2744b8c6792f36a11e1d45331f56c4da21b4ae36abe1c4fcf9d5d2d34ff44e62" HandleID="k8s-pod-network.2744b8c6792f36a11e1d45331f56c4da21b4ae36abe1c4fcf9d5d2d34ff44e62" Workload="localhost-k8s-coredns--668d6bf9bc--s979b-eth0" Nov 1 00:46:24.769098 env[1313]: 2025-11-01 00:46:24.716 [INFO][3976] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2744b8c6792f36a11e1d45331f56c4da21b4ae36abe1c4fcf9d5d2d34ff44e62" Namespace="kube-system" Pod="coredns-668d6bf9bc-s979b" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--s979b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--s979b-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6dbb4ea3-15f9-43b9-bbdc-e92a3f607f2c", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 45, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", 
Pod:"coredns-668d6bf9bc-s979b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali84a508d0077", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:46:24.769098 env[1313]: 2025-11-01 00:46:24.716 [INFO][3976] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="2744b8c6792f36a11e1d45331f56c4da21b4ae36abe1c4fcf9d5d2d34ff44e62" Namespace="kube-system" Pod="coredns-668d6bf9bc-s979b" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--s979b-eth0" Nov 1 00:46:24.769098 env[1313]: 2025-11-01 00:46:24.716 [INFO][3976] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali84a508d0077 ContainerID="2744b8c6792f36a11e1d45331f56c4da21b4ae36abe1c4fcf9d5d2d34ff44e62" Namespace="kube-system" Pod="coredns-668d6bf9bc-s979b" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--s979b-eth0" Nov 1 00:46:24.769098 env[1313]: 2025-11-01 00:46:24.724 [INFO][3976] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2744b8c6792f36a11e1d45331f56c4da21b4ae36abe1c4fcf9d5d2d34ff44e62" Namespace="kube-system" Pod="coredns-668d6bf9bc-s979b" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--s979b-eth0" Nov 1 00:46:24.769098 env[1313]: 2025-11-01 00:46:24.732 [INFO][3976] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="2744b8c6792f36a11e1d45331f56c4da21b4ae36abe1c4fcf9d5d2d34ff44e62" Namespace="kube-system" Pod="coredns-668d6bf9bc-s979b" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--s979b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--s979b-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6dbb4ea3-15f9-43b9-bbdc-e92a3f607f2c", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 45, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2744b8c6792f36a11e1d45331f56c4da21b4ae36abe1c4fcf9d5d2d34ff44e62", Pod:"coredns-668d6bf9bc-s979b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali84a508d0077", MAC:"c2:e6:a8:3e:43:96", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:46:24.769098 env[1313]: 2025-11-01 00:46:24.760 [INFO][3976] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2744b8c6792f36a11e1d45331f56c4da21b4ae36abe1c4fcf9d5d2d34ff44e62" Namespace="kube-system" Pod="coredns-668d6bf9bc-s979b" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--s979b-eth0" Nov 1 00:46:24.790481 env[1313]: time="2025-11-01T00:46:24.790342004Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:46:24.793754 env[1313]: time="2025-11-01T00:46:24.793660726Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:46:24.793931 env[1313]: time="2025-11-01T00:46:24.793903661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:46:24.794356 env[1313]: time="2025-11-01T00:46:24.794316416Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2744b8c6792f36a11e1d45331f56c4da21b4ae36abe1c4fcf9d5d2d34ff44e62 pid=4037 runtime=io.containerd.runc.v2 Nov 1 00:46:24.806000 audit[4053]: NETFILTER_CFG table=filter:115 family=2 entries=52 op=nft_register_chain pid=4053 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:46:24.806000 audit[4053]: SYSCALL arch=c000003e syscall=46 success=yes exit=26592 a0=3 a1=7ffd03ac8e10 a2=0 a3=7ffd03ac8dfc items=0 ppid=3511 pid=4053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:24.806000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:46:24.832096 systemd-networkd[1077]: calia48937da2b4: Link UP Nov 1 00:46:24.836845 systemd-networkd[1077]: calia48937da2b4: Gained carrier Nov 1 00:46:24.837465 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calia48937da2b4: link becomes ready Nov 1 00:46:24.847228 systemd-resolved[1223]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:46:24.854430 env[1313]: time="2025-11-01T00:46:24.854371604Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:46:24.857934 env[1313]: 2025-11-01 00:46:24.634 [INFO][3991] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7c858d548c--cmrkr-eth0 calico-apiserver-7c858d548c- calico-apiserver 704976ec-fdca-4788-bd96-1a541f0cf01c 944 0 2025-11-01 00:45:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7c858d548c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7c858d548c-cmrkr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia48937da2b4 [] [] }} ContainerID="0db8a7948d74f6f54779e2dfa35c25a4eac2f258b95a4aed262375e86a65b80b" Namespace="calico-apiserver" Pod="calico-apiserver-7c858d548c-cmrkr" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c858d548c--cmrkr-" Nov 1 00:46:24.857934 env[1313]: 2025-11-01 00:46:24.635 [INFO][3991] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0db8a7948d74f6f54779e2dfa35c25a4eac2f258b95a4aed262375e86a65b80b" Namespace="calico-apiserver" Pod="calico-apiserver-7c858d548c-cmrkr" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--7c858d548c--cmrkr-eth0" Nov 1 00:46:24.857934 env[1313]: 2025-11-01 00:46:24.682 [INFO][4013] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0db8a7948d74f6f54779e2dfa35c25a4eac2f258b95a4aed262375e86a65b80b" HandleID="k8s-pod-network.0db8a7948d74f6f54779e2dfa35c25a4eac2f258b95a4aed262375e86a65b80b" Workload="localhost-k8s-calico--apiserver--7c858d548c--cmrkr-eth0" Nov 1 00:46:24.857934 env[1313]: 2025-11-01 00:46:24.682 [INFO][4013] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0db8a7948d74f6f54779e2dfa35c25a4eac2f258b95a4aed262375e86a65b80b" HandleID="k8s-pod-network.0db8a7948d74f6f54779e2dfa35c25a4eac2f258b95a4aed262375e86a65b80b" Workload="localhost-k8s-calico--apiserver--7c858d548c--cmrkr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e440), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7c858d548c-cmrkr", "timestamp":"2025-11-01 00:46:24.68256704 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:46:24.857934 env[1313]: 2025-11-01 00:46:24.683 [INFO][4013] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:46:24.857934 env[1313]: 2025-11-01 00:46:24.709 [INFO][4013] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:46:24.857934 env[1313]: 2025-11-01 00:46:24.710 [INFO][4013] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 00:46:24.857934 env[1313]: 2025-11-01 00:46:24.739 [INFO][4013] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0db8a7948d74f6f54779e2dfa35c25a4eac2f258b95a4aed262375e86a65b80b" host="localhost" Nov 1 00:46:24.857934 env[1313]: 2025-11-01 00:46:24.752 [INFO][4013] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 00:46:24.857934 env[1313]: 2025-11-01 00:46:24.769 [INFO][4013] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 00:46:24.857934 env[1313]: 2025-11-01 00:46:24.779 [INFO][4013] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 00:46:24.857934 env[1313]: 2025-11-01 00:46:24.789 [INFO][4013] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 00:46:24.857934 env[1313]: 2025-11-01 00:46:24.789 [INFO][4013] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0db8a7948d74f6f54779e2dfa35c25a4eac2f258b95a4aed262375e86a65b80b" host="localhost" Nov 1 00:46:24.857934 env[1313]: 2025-11-01 00:46:24.791 [INFO][4013] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0db8a7948d74f6f54779e2dfa35c25a4eac2f258b95a4aed262375e86a65b80b Nov 1 00:46:24.857934 env[1313]: 2025-11-01 00:46:24.799 [INFO][4013] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0db8a7948d74f6f54779e2dfa35c25a4eac2f258b95a4aed262375e86a65b80b" host="localhost" Nov 1 00:46:24.857934 env[1313]: 2025-11-01 00:46:24.820 [INFO][4013] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.0db8a7948d74f6f54779e2dfa35c25a4eac2f258b95a4aed262375e86a65b80b" host="localhost" Nov 1 00:46:24.857934 
env[1313]: 2025-11-01 00:46:24.820 [INFO][4013] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.0db8a7948d74f6f54779e2dfa35c25a4eac2f258b95a4aed262375e86a65b80b" host="localhost" Nov 1 00:46:24.857934 env[1313]: 2025-11-01 00:46:24.821 [INFO][4013] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:46:24.857934 env[1313]: 2025-11-01 00:46:24.821 [INFO][4013] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="0db8a7948d74f6f54779e2dfa35c25a4eac2f258b95a4aed262375e86a65b80b" HandleID="k8s-pod-network.0db8a7948d74f6f54779e2dfa35c25a4eac2f258b95a4aed262375e86a65b80b" Workload="localhost-k8s-calico--apiserver--7c858d548c--cmrkr-eth0" Nov 1 00:46:24.858769 env[1313]: 2025-11-01 00:46:24.828 [INFO][3991] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0db8a7948d74f6f54779e2dfa35c25a4eac2f258b95a4aed262375e86a65b80b" Namespace="calico-apiserver" Pod="calico-apiserver-7c858d548c-cmrkr" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c858d548c--cmrkr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7c858d548c--cmrkr-eth0", GenerateName:"calico-apiserver-7c858d548c-", Namespace:"calico-apiserver", SelfLink:"", UID:"704976ec-fdca-4788-bd96-1a541f0cf01c", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 45, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c858d548c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7c858d548c-cmrkr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia48937da2b4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:46:24.858769 env[1313]: 2025-11-01 00:46:24.828 [INFO][3991] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="0db8a7948d74f6f54779e2dfa35c25a4eac2f258b95a4aed262375e86a65b80b" Namespace="calico-apiserver" Pod="calico-apiserver-7c858d548c-cmrkr" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c858d548c--cmrkr-eth0" Nov 1 00:46:24.858769 env[1313]: 2025-11-01 00:46:24.828 [INFO][3991] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia48937da2b4 ContainerID="0db8a7948d74f6f54779e2dfa35c25a4eac2f258b95a4aed262375e86a65b80b" Namespace="calico-apiserver" Pod="calico-apiserver-7c858d548c-cmrkr" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c858d548c--cmrkr-eth0" Nov 1 00:46:24.858769 env[1313]: 2025-11-01 00:46:24.837 [INFO][3991] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0db8a7948d74f6f54779e2dfa35c25a4eac2f258b95a4aed262375e86a65b80b" Namespace="calico-apiserver" Pod="calico-apiserver-7c858d548c-cmrkr" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c858d548c--cmrkr-eth0" Nov 1 00:46:24.858769 env[1313]: 2025-11-01 00:46:24.838 [INFO][3991] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0db8a7948d74f6f54779e2dfa35c25a4eac2f258b95a4aed262375e86a65b80b" Namespace="calico-apiserver" Pod="calico-apiserver-7c858d548c-cmrkr" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--7c858d548c--cmrkr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7c858d548c--cmrkr-eth0", GenerateName:"calico-apiserver-7c858d548c-", Namespace:"calico-apiserver", SelfLink:"", UID:"704976ec-fdca-4788-bd96-1a541f0cf01c", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 45, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c858d548c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0db8a7948d74f6f54779e2dfa35c25a4eac2f258b95a4aed262375e86a65b80b", Pod:"calico-apiserver-7c858d548c-cmrkr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia48937da2b4", MAC:"b2:12:36:8d:f3:69", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:46:24.858769 env[1313]: 2025-11-01 00:46:24.853 [INFO][3991] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0db8a7948d74f6f54779e2dfa35c25a4eac2f258b95a4aed262375e86a65b80b" Namespace="calico-apiserver" Pod="calico-apiserver-7c858d548c-cmrkr" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--7c858d548c--cmrkr-eth0" Nov 1 00:46:24.859819 env[1313]: time="2025-11-01T00:46:24.859762135Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:46:24.860921 kubelet[2110]: E1101 00:46:24.860118 2110 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:46:24.860921 kubelet[2110]: E1101 00:46:24.860188 2110 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:46:24.860921 kubelet[2110]: E1101 00:46:24.860517 2110 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n4n2n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7c858d548c-69qzg_calico-apiserver(ffdf82e5-9850-41df-9576-1cf8a00ef8fd): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:46:24.861517 env[1313]: time="2025-11-01T00:46:24.861480821Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:46:24.862642 kubelet[2110]: E1101 00:46:24.862570 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c858d548c-69qzg" podUID="ffdf82e5-9850-41df-9576-1cf8a00ef8fd" Nov 1 00:46:24.875000 audit[4081]: NETFILTER_CFG table=filter:116 family=2 entries=41 op=nft_register_chain pid=4081 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:46:24.875000 audit[4081]: SYSCALL arch=c000003e syscall=46 success=yes exit=23060 a0=3 a1=7ffc03f0fd80 a2=0 a3=7ffc03f0fd6c items=0 ppid=3511 pid=4081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:24.875000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:46:24.882579 env[1313]: time="2025-11-01T00:46:24.881796686Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:46:24.882579 env[1313]: time="2025-11-01T00:46:24.881872147Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:46:24.882579 env[1313]: time="2025-11-01T00:46:24.881888519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:46:24.882579 env[1313]: time="2025-11-01T00:46:24.882454902Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0db8a7948d74f6f54779e2dfa35c25a4eac2f258b95a4aed262375e86a65b80b pid=4088 runtime=io.containerd.runc.v2 Nov 1 00:46:24.891104 kubelet[2110]: E1101 00:46:24.890682 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c858d548c-69qzg" podUID="ffdf82e5-9850-41df-9576-1cf8a00ef8fd" Nov 1 00:46:24.898033 env[1313]: time="2025-11-01T00:46:24.897937411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s979b,Uid:6dbb4ea3-15f9-43b9-bbdc-e92a3f607f2c,Namespace:kube-system,Attempt:1,} returns sandbox id \"2744b8c6792f36a11e1d45331f56c4da21b4ae36abe1c4fcf9d5d2d34ff44e62\"" Nov 1 00:46:24.899974 kubelet[2110]: E1101 00:46:24.899858 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:46:24.905759 env[1313]: 
time="2025-11-01T00:46:24.905693975Z" level=info msg="CreateContainer within sandbox \"2744b8c6792f36a11e1d45331f56c4da21b4ae36abe1c4fcf9d5d2d34ff44e62\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:46:24.928116 systemd-resolved[1223]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:46:24.958000 audit[4121]: NETFILTER_CFG table=filter:117 family=2 entries=20 op=nft_register_rule pid=4121 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:46:24.958000 audit[4121]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcd5cc7e70 a2=0 a3=7ffcd5cc7e5c items=0 ppid=2264 pid=4121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:24.958000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:46:24.968000 audit[4121]: NETFILTER_CFG table=nat:118 family=2 entries=14 op=nft_register_rule pid=4121 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:46:24.968000 audit[4121]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffcd5cc7e70 a2=0 a3=0 items=0 ppid=2264 pid=4121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:24.968000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:46:24.970047 env[1313]: time="2025-11-01T00:46:24.970003139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c858d548c-cmrkr,Uid:704976ec-fdca-4788-bd96-1a541f0cf01c,Namespace:calico-apiserver,Attempt:1,} returns sandbox id 
\"0db8a7948d74f6f54779e2dfa35c25a4eac2f258b95a4aed262375e86a65b80b\"" Nov 1 00:46:24.982410 env[1313]: time="2025-11-01T00:46:24.979402387Z" level=info msg="CreateContainer within sandbox \"2744b8c6792f36a11e1d45331f56c4da21b4ae36abe1c4fcf9d5d2d34ff44e62\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f544ed059996bd3f6bd1771c9cb4d92e3e68785943185aefb0f0bca74315557e\"" Nov 1 00:46:24.982410 env[1313]: time="2025-11-01T00:46:24.981471723Z" level=info msg="StartContainer for \"f544ed059996bd3f6bd1771c9cb4d92e3e68785943185aefb0f0bca74315557e\"" Nov 1 00:46:25.071338 env[1313]: time="2025-11-01T00:46:25.071267129Z" level=info msg="StartContainer for \"f544ed059996bd3f6bd1771c9cb4d92e3e68785943185aefb0f0bca74315557e\" returns successfully" Nov 1 00:46:25.184155 env[1313]: time="2025-11-01T00:46:25.184025414Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:46:25.245810 env[1313]: time="2025-11-01T00:46:25.245428173Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:46:25.246162 kubelet[2110]: E1101 00:46:25.245814 2110 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:46:25.246162 kubelet[2110]: E1101 00:46:25.245869 2110 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:46:25.246162 kubelet[2110]: E1101 00:46:25.246121 2110 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g4bvg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false
,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-547545f98f-bqwf6_calico-system(60293e01-1e82-445c-9d51-cf8544191dce): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:46:25.246782 env[1313]: time="2025-11-01T00:46:25.246511347Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:46:25.247849 kubelet[2110]: E1101 00:46:25.247770 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-547545f98f-bqwf6" podUID="60293e01-1e82-445c-9d51-cf8544191dce" Nov 1 00:46:25.270833 env[1313]: time="2025-11-01T00:46:25.269769142Z" level=info msg="StopPodSandbox for \"4a48ae9ec8973deb46659af44aa4a65628f3f9ec5347c31c080a0a8d52ee7cfa\"" Nov 1 00:46:25.437522 systemd-networkd[1077]: calib4b22c13b23: Gained IPv6LL Nov 1 00:46:25.501114 systemd-networkd[1077]: cali8d1afc48e0d: Gained IPv6LL Nov 1 00:46:25.723392 env[1313]: time="2025-11-01T00:46:25.723296485Z" level=info msg="trying 
next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:46:25.727561 env[1313]: 2025-11-01 00:46:25.649 [INFO][4172] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4a48ae9ec8973deb46659af44aa4a65628f3f9ec5347c31c080a0a8d52ee7cfa" Nov 1 00:46:25.727561 env[1313]: 2025-11-01 00:46:25.650 [INFO][4172] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4a48ae9ec8973deb46659af44aa4a65628f3f9ec5347c31c080a0a8d52ee7cfa" iface="eth0" netns="/var/run/netns/cni-bf532d33-635a-c2c2-f6a2-f099ab84a197" Nov 1 00:46:25.727561 env[1313]: 2025-11-01 00:46:25.650 [INFO][4172] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4a48ae9ec8973deb46659af44aa4a65628f3f9ec5347c31c080a0a8d52ee7cfa" iface="eth0" netns="/var/run/netns/cni-bf532d33-635a-c2c2-f6a2-f099ab84a197" Nov 1 00:46:25.727561 env[1313]: 2025-11-01 00:46:25.650 [INFO][4172] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4a48ae9ec8973deb46659af44aa4a65628f3f9ec5347c31c080a0a8d52ee7cfa" iface="eth0" netns="/var/run/netns/cni-bf532d33-635a-c2c2-f6a2-f099ab84a197" Nov 1 00:46:25.727561 env[1313]: 2025-11-01 00:46:25.651 [INFO][4172] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4a48ae9ec8973deb46659af44aa4a65628f3f9ec5347c31c080a0a8d52ee7cfa" Nov 1 00:46:25.727561 env[1313]: 2025-11-01 00:46:25.651 [INFO][4172] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4a48ae9ec8973deb46659af44aa4a65628f3f9ec5347c31c080a0a8d52ee7cfa" Nov 1 00:46:25.727561 env[1313]: 2025-11-01 00:46:25.701 [INFO][4180] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4a48ae9ec8973deb46659af44aa4a65628f3f9ec5347c31c080a0a8d52ee7cfa" HandleID="k8s-pod-network.4a48ae9ec8973deb46659af44aa4a65628f3f9ec5347c31c080a0a8d52ee7cfa" Workload="localhost-k8s-coredns--668d6bf9bc--jfbds-eth0" Nov 1 00:46:25.727561 env[1313]: 2025-11-01 00:46:25.701 [INFO][4180] ipam/ipam_plugin.go 
377: About to acquire host-wide IPAM lock. Nov 1 00:46:25.727561 env[1313]: 2025-11-01 00:46:25.701 [INFO][4180] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:46:25.727561 env[1313]: 2025-11-01 00:46:25.712 [WARNING][4180] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="4a48ae9ec8973deb46659af44aa4a65628f3f9ec5347c31c080a0a8d52ee7cfa" HandleID="k8s-pod-network.4a48ae9ec8973deb46659af44aa4a65628f3f9ec5347c31c080a0a8d52ee7cfa" Workload="localhost-k8s-coredns--668d6bf9bc--jfbds-eth0" Nov 1 00:46:25.727561 env[1313]: 2025-11-01 00:46:25.712 [INFO][4180] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4a48ae9ec8973deb46659af44aa4a65628f3f9ec5347c31c080a0a8d52ee7cfa" HandleID="k8s-pod-network.4a48ae9ec8973deb46659af44aa4a65628f3f9ec5347c31c080a0a8d52ee7cfa" Workload="localhost-k8s-coredns--668d6bf9bc--jfbds-eth0" Nov 1 00:46:25.727561 env[1313]: 2025-11-01 00:46:25.718 [INFO][4180] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:46:25.727561 env[1313]: 2025-11-01 00:46:25.724 [INFO][4172] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4a48ae9ec8973deb46659af44aa4a65628f3f9ec5347c31c080a0a8d52ee7cfa" Nov 1 00:46:25.736236 env[1313]: time="2025-11-01T00:46:25.733013307Z" level=info msg="TearDown network for sandbox \"4a48ae9ec8973deb46659af44aa4a65628f3f9ec5347c31c080a0a8d52ee7cfa\" successfully" Nov 1 00:46:25.736236 env[1313]: time="2025-11-01T00:46:25.733068110Z" level=info msg="StopPodSandbox for \"4a48ae9ec8973deb46659af44aa4a65628f3f9ec5347c31c080a0a8d52ee7cfa\" returns successfully" Nov 1 00:46:25.736236 env[1313]: time="2025-11-01T00:46:25.734266069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jfbds,Uid:6a5dd67d-d6ec-4bd2-9dac-f43fc3314f67,Namespace:kube-system,Attempt:1,}" Nov 1 00:46:25.735723 systemd[1]: run-netns-cni\x2dbf532d33\x2d635a\x2dc2c2\x2df6a2\x2df099ab84a197.mount: Deactivated successfully. 
Nov 1 00:46:25.736621 kubelet[2110]: E1101 00:46:25.733544 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:46:25.763987 env[1313]: time="2025-11-01T00:46:25.763746416Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:46:25.764173 kubelet[2110]: E1101 00:46:25.764089 2110 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:46:25.764173 kubelet[2110]: E1101 00:46:25.764164 2110 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:46:25.765889 kubelet[2110]: E1101 00:46:25.764355 2110 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-46898,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7c858d548c-cmrkr_calico-apiserver(704976ec-fdca-4788-bd96-1a541f0cf01c): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:46:25.767858 kubelet[2110]: E1101 00:46:25.767824 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c858d548c-cmrkr" podUID="704976ec-fdca-4788-bd96-1a541f0cf01c" Nov 1 00:46:25.897395 kubelet[2110]: E1101 00:46:25.896832 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:46:25.900319 kubelet[2110]: E1101 00:46:25.900250 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c858d548c-69qzg" podUID="ffdf82e5-9850-41df-9576-1cf8a00ef8fd" Nov 1 00:46:25.900649 kubelet[2110]: E1101 00:46:25.900612 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack 
image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c858d548c-cmrkr" podUID="704976ec-fdca-4788-bd96-1a541f0cf01c" Nov 1 00:46:25.901188 kubelet[2110]: E1101 00:46:25.901101 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-547545f98f-bqwf6" podUID="60293e01-1e82-445c-9d51-cf8544191dce" Nov 1 00:46:26.004000 audit[4188]: NETFILTER_CFG table=filter:119 family=2 entries=20 op=nft_register_rule pid=4188 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:46:26.004000 audit[4188]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd04f44360 a2=0 a3=7ffd04f4434c items=0 ppid=2264 pid=4188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:26.004000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:46:26.008000 audit[4188]: NETFILTER_CFG table=nat:120 family=2 entries=14 op=nft_register_rule pid=4188 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:46:26.008000 audit[4188]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffd04f44360 a2=0 a3=0 items=0 ppid=2264 pid=4188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:26.008000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:46:26.013539 systemd-networkd[1077]: cali84a508d0077: Gained IPv6LL Nov 1 00:46:26.071935 kubelet[2110]: I1101 00:46:26.071861 2110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-s979b" podStartSLOduration=45.071839519 podStartE2EDuration="45.071839519s" podCreationTimestamp="2025-11-01 00:45:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:46:25.979243155 +0000 UTC m=+49.838700945" watchObservedRunningTime="2025-11-01 00:46:26.071839519 +0000 UTC m=+49.931297329" Nov 1 00:46:26.150000 audit[4202]: NETFILTER_CFG table=filter:121 family=2 entries=20 op=nft_register_rule pid=4202 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:46:26.150000 audit[4202]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff125c7ad0 a2=0 a3=7fff125c7abc items=0 ppid=2264 pid=4202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:26.150000 audit: 
PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:46:26.154000 audit[4202]: NETFILTER_CFG table=nat:122 family=2 entries=14 op=nft_register_rule pid=4202 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:46:26.154000 audit[4202]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff125c7ad0 a2=0 a3=0 items=0 ppid=2264 pid=4202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:26.154000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:46:26.271121 env[1313]: time="2025-11-01T00:46:26.270990574Z" level=info msg="StopPodSandbox for \"7246ed21bc671fc80d547e9ff96b714b56723571def31f8f182068ee838b16ca\"" Nov 1 00:46:26.271121 env[1313]: time="2025-11-01T00:46:26.271073640Z" level=info msg="StopPodSandbox for \"c0f66d4bbe5eb61cab56e8ea514aa99208d4e0768b561701e1c0823b7e31fbc9\"" Nov 1 00:46:26.332622 systemd-networkd[1077]: calia48937da2b4: Gained IPv6LL Nov 1 00:46:26.901794 kubelet[2110]: E1101 00:46:26.901548 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:46:26.903225 kubelet[2110]: E1101 00:46:26.903182 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-7c858d548c-cmrkr" podUID="704976ec-fdca-4788-bd96-1a541f0cf01c" Nov 1 00:46:27.238654 env[1313]: 2025-11-01 00:46:26.865 [INFO][4228] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c0f66d4bbe5eb61cab56e8ea514aa99208d4e0768b561701e1c0823b7e31fbc9" Nov 1 00:46:27.238654 env[1313]: 2025-11-01 00:46:26.865 [INFO][4228] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c0f66d4bbe5eb61cab56e8ea514aa99208d4e0768b561701e1c0823b7e31fbc9" iface="eth0" netns="/var/run/netns/cni-9736d769-e27d-fee9-dfdf-be9bdf60563b" Nov 1 00:46:27.238654 env[1313]: 2025-11-01 00:46:26.865 [INFO][4228] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c0f66d4bbe5eb61cab56e8ea514aa99208d4e0768b561701e1c0823b7e31fbc9" iface="eth0" netns="/var/run/netns/cni-9736d769-e27d-fee9-dfdf-be9bdf60563b" Nov 1 00:46:27.238654 env[1313]: 2025-11-01 00:46:26.865 [INFO][4228] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c0f66d4bbe5eb61cab56e8ea514aa99208d4e0768b561701e1c0823b7e31fbc9" iface="eth0" netns="/var/run/netns/cni-9736d769-e27d-fee9-dfdf-be9bdf60563b" Nov 1 00:46:27.238654 env[1313]: 2025-11-01 00:46:26.865 [INFO][4228] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c0f66d4bbe5eb61cab56e8ea514aa99208d4e0768b561701e1c0823b7e31fbc9" Nov 1 00:46:27.238654 env[1313]: 2025-11-01 00:46:26.865 [INFO][4228] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c0f66d4bbe5eb61cab56e8ea514aa99208d4e0768b561701e1c0823b7e31fbc9" Nov 1 00:46:27.238654 env[1313]: 2025-11-01 00:46:26.890 [INFO][4249] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c0f66d4bbe5eb61cab56e8ea514aa99208d4e0768b561701e1c0823b7e31fbc9" HandleID="k8s-pod-network.c0f66d4bbe5eb61cab56e8ea514aa99208d4e0768b561701e1c0823b7e31fbc9" Workload="localhost-k8s-calico--kube--controllers--5dcb6947d5--ljzpr-eth0" Nov 1 00:46:27.238654 env[1313]: 2025-11-01 00:46:26.891 [INFO][4249] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:46:27.238654 env[1313]: 2025-11-01 00:46:26.891 [INFO][4249] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:46:27.238654 env[1313]: 2025-11-01 00:46:26.915 [WARNING][4249] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c0f66d4bbe5eb61cab56e8ea514aa99208d4e0768b561701e1c0823b7e31fbc9" HandleID="k8s-pod-network.c0f66d4bbe5eb61cab56e8ea514aa99208d4e0768b561701e1c0823b7e31fbc9" Workload="localhost-k8s-calico--kube--controllers--5dcb6947d5--ljzpr-eth0" Nov 1 00:46:27.238654 env[1313]: 2025-11-01 00:46:26.915 [INFO][4249] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c0f66d4bbe5eb61cab56e8ea514aa99208d4e0768b561701e1c0823b7e31fbc9" HandleID="k8s-pod-network.c0f66d4bbe5eb61cab56e8ea514aa99208d4e0768b561701e1c0823b7e31fbc9" Workload="localhost-k8s-calico--kube--controllers--5dcb6947d5--ljzpr-eth0" Nov 1 00:46:27.238654 env[1313]: 2025-11-01 00:46:27.224 [INFO][4249] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:46:27.238654 env[1313]: 2025-11-01 00:46:27.227 [INFO][4228] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c0f66d4bbe5eb61cab56e8ea514aa99208d4e0768b561701e1c0823b7e31fbc9" Nov 1 00:46:27.244391 systemd[1]: run-netns-cni\x2d9736d769\x2de27d\x2dfee9\x2ddfdf\x2dbe9bdf60563b.mount: Deactivated successfully. 
Nov 1 00:46:27.244925 env[1313]: time="2025-11-01T00:46:27.244495831Z" level=info msg="TearDown network for sandbox \"c0f66d4bbe5eb61cab56e8ea514aa99208d4e0768b561701e1c0823b7e31fbc9\" successfully" Nov 1 00:46:27.244925 env[1313]: time="2025-11-01T00:46:27.244554371Z" level=info msg="StopPodSandbox for \"c0f66d4bbe5eb61cab56e8ea514aa99208d4e0768b561701e1c0823b7e31fbc9\" returns successfully" Nov 1 00:46:27.247812 env[1313]: time="2025-11-01T00:46:27.247759077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5dcb6947d5-ljzpr,Uid:6fe081ef-ff27-4230-8865-b572345e2224,Namespace:calico-system,Attempt:1,}" Nov 1 00:46:27.250000 audit[4275]: NETFILTER_CFG table=filter:123 family=2 entries=20 op=nft_register_rule pid=4275 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:46:27.250000 audit[4275]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe2885edb0 a2=0 a3=7ffe2885ed9c items=0 ppid=2264 pid=4275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:27.250000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:46:27.258000 audit[4275]: NETFILTER_CFG table=nat:124 family=2 entries=14 op=nft_register_rule pid=4275 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:46:27.258000 audit[4275]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffe2885edb0 a2=0 a3=0 items=0 ppid=2264 pid=4275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:27.258000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:46:27.905361 kubelet[2110]: E1101 00:46:27.905297 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:46:27.931488 env[1313]: 2025-11-01 00:46:26.866 [INFO][4229] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7246ed21bc671fc80d547e9ff96b714b56723571def31f8f182068ee838b16ca" Nov 1 00:46:27.931488 env[1313]: 2025-11-01 00:46:26.866 [INFO][4229] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7246ed21bc671fc80d547e9ff96b714b56723571def31f8f182068ee838b16ca" iface="eth0" netns="/var/run/netns/cni-6c1a84d2-476d-0075-7dab-690b16941c9a" Nov 1 00:46:27.931488 env[1313]: 2025-11-01 00:46:26.866 [INFO][4229] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7246ed21bc671fc80d547e9ff96b714b56723571def31f8f182068ee838b16ca" iface="eth0" netns="/var/run/netns/cni-6c1a84d2-476d-0075-7dab-690b16941c9a" Nov 1 00:46:27.931488 env[1313]: 2025-11-01 00:46:26.867 [INFO][4229] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7246ed21bc671fc80d547e9ff96b714b56723571def31f8f182068ee838b16ca" iface="eth0" netns="/var/run/netns/cni-6c1a84d2-476d-0075-7dab-690b16941c9a" Nov 1 00:46:27.931488 env[1313]: 2025-11-01 00:46:26.867 [INFO][4229] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7246ed21bc671fc80d547e9ff96b714b56723571def31f8f182068ee838b16ca" Nov 1 00:46:27.931488 env[1313]: 2025-11-01 00:46:26.867 [INFO][4229] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7246ed21bc671fc80d547e9ff96b714b56723571def31f8f182068ee838b16ca" Nov 1 00:46:27.931488 env[1313]: 2025-11-01 00:46:26.901 [INFO][4255] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7246ed21bc671fc80d547e9ff96b714b56723571def31f8f182068ee838b16ca" HandleID="k8s-pod-network.7246ed21bc671fc80d547e9ff96b714b56723571def31f8f182068ee838b16ca" Workload="localhost-k8s-csi--node--driver--zk5w7-eth0" Nov 1 00:46:27.931488 env[1313]: 2025-11-01 00:46:26.905 [INFO][4255] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:46:27.931488 env[1313]: 2025-11-01 00:46:27.228 [INFO][4255] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:46:27.931488 env[1313]: 2025-11-01 00:46:27.700 [WARNING][4255] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7246ed21bc671fc80d547e9ff96b714b56723571def31f8f182068ee838b16ca" HandleID="k8s-pod-network.7246ed21bc671fc80d547e9ff96b714b56723571def31f8f182068ee838b16ca" Workload="localhost-k8s-csi--node--driver--zk5w7-eth0" Nov 1 00:46:27.931488 env[1313]: 2025-11-01 00:46:27.700 [INFO][4255] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7246ed21bc671fc80d547e9ff96b714b56723571def31f8f182068ee838b16ca" HandleID="k8s-pod-network.7246ed21bc671fc80d547e9ff96b714b56723571def31f8f182068ee838b16ca" Workload="localhost-k8s-csi--node--driver--zk5w7-eth0" Nov 1 00:46:27.931488 env[1313]: 2025-11-01 00:46:27.911 [INFO][4255] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:46:27.931488 env[1313]: 2025-11-01 00:46:27.928 [INFO][4229] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7246ed21bc671fc80d547e9ff96b714b56723571def31f8f182068ee838b16ca" Nov 1 00:46:27.942389 systemd[1]: run-netns-cni\x2d6c1a84d2\x2d476d\x2d0075\x2d7dab\x2d690b16941c9a.mount: Deactivated successfully. 
Nov 1 00:46:27.945787 env[1313]: time="2025-11-01T00:46:27.945737795Z" level=info msg="TearDown network for sandbox \"7246ed21bc671fc80d547e9ff96b714b56723571def31f8f182068ee838b16ca\" successfully" Nov 1 00:46:27.945912 env[1313]: time="2025-11-01T00:46:27.945883688Z" level=info msg="StopPodSandbox for \"7246ed21bc671fc80d547e9ff96b714b56723571def31f8f182068ee838b16ca\" returns successfully" Nov 1 00:46:27.946857 env[1313]: time="2025-11-01T00:46:27.946823753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zk5w7,Uid:323323dc-c361-4116-a022-8e5f45430869,Namespace:calico-system,Attempt:1,}" Nov 1 00:46:28.084993 systemd-networkd[1077]: cali7b78ab7e26f: Link UP Nov 1 00:46:28.093874 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Nov 1 00:46:28.094137 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali7b78ab7e26f: link becomes ready Nov 1 00:46:28.096200 systemd-networkd[1077]: cali7b78ab7e26f: Gained carrier Nov 1 00:46:28.123756 env[1313]: 2025-11-01 00:46:26.677 [INFO][4192] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--jfbds-eth0 coredns-668d6bf9bc- kube-system 6a5dd67d-d6ec-4bd2-9dac-f43fc3314f67 977 0 2025-11-01 00:45:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-jfbds eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7b78ab7e26f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="8dacfa0c4a5d0c5620233d44dc758e17bf26b41a780f59c14053aa94f4f8edf5" Namespace="kube-system" Pod="coredns-668d6bf9bc-jfbds" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jfbds-" Nov 1 00:46:28.123756 env[1313]: 2025-11-01 00:46:26.677 [INFO][4192] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="8dacfa0c4a5d0c5620233d44dc758e17bf26b41a780f59c14053aa94f4f8edf5" Namespace="kube-system" Pod="coredns-668d6bf9bc-jfbds" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jfbds-eth0" Nov 1 00:46:28.123756 env[1313]: 2025-11-01 00:46:26.931 [INFO][4266] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8dacfa0c4a5d0c5620233d44dc758e17bf26b41a780f59c14053aa94f4f8edf5" HandleID="k8s-pod-network.8dacfa0c4a5d0c5620233d44dc758e17bf26b41a780f59c14053aa94f4f8edf5" Workload="localhost-k8s-coredns--668d6bf9bc--jfbds-eth0" Nov 1 00:46:28.123756 env[1313]: 2025-11-01 00:46:26.931 [INFO][4266] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8dacfa0c4a5d0c5620233d44dc758e17bf26b41a780f59c14053aa94f4f8edf5" HandleID="k8s-pod-network.8dacfa0c4a5d0c5620233d44dc758e17bf26b41a780f59c14053aa94f4f8edf5" Workload="localhost-k8s-coredns--668d6bf9bc--jfbds-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f460), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-jfbds", "timestamp":"2025-11-01 00:46:26.931084686 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:46:28.123756 env[1313]: 2025-11-01 00:46:26.931 [INFO][4266] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:46:28.123756 env[1313]: 2025-11-01 00:46:27.911 [INFO][4266] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:46:28.123756 env[1313]: 2025-11-01 00:46:27.911 [INFO][4266] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 00:46:28.123756 env[1313]: 2025-11-01 00:46:27.948 [INFO][4266] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8dacfa0c4a5d0c5620233d44dc758e17bf26b41a780f59c14053aa94f4f8edf5" host="localhost" Nov 1 00:46:28.123756 env[1313]: 2025-11-01 00:46:27.967 [INFO][4266] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 00:46:28.123756 env[1313]: 2025-11-01 00:46:27.982 [INFO][4266] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 00:46:28.123756 env[1313]: 2025-11-01 00:46:27.987 [INFO][4266] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 00:46:28.123756 env[1313]: 2025-11-01 00:46:27.997 [INFO][4266] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 00:46:28.123756 env[1313]: 2025-11-01 00:46:27.997 [INFO][4266] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8dacfa0c4a5d0c5620233d44dc758e17bf26b41a780f59c14053aa94f4f8edf5" host="localhost" Nov 1 00:46:28.123756 env[1313]: 2025-11-01 00:46:28.004 [INFO][4266] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8dacfa0c4a5d0c5620233d44dc758e17bf26b41a780f59c14053aa94f4f8edf5 Nov 1 00:46:28.123756 env[1313]: 2025-11-01 00:46:28.016 [INFO][4266] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8dacfa0c4a5d0c5620233d44dc758e17bf26b41a780f59c14053aa94f4f8edf5" host="localhost" Nov 1 00:46:28.123756 env[1313]: 2025-11-01 00:46:28.043 [INFO][4266] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.8dacfa0c4a5d0c5620233d44dc758e17bf26b41a780f59c14053aa94f4f8edf5" host="localhost" Nov 1 00:46:28.123756 
env[1313]: 2025-11-01 00:46:28.043 [INFO][4266] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.8dacfa0c4a5d0c5620233d44dc758e17bf26b41a780f59c14053aa94f4f8edf5" host="localhost" Nov 1 00:46:28.123756 env[1313]: 2025-11-01 00:46:28.043 [INFO][4266] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:46:28.123756 env[1313]: 2025-11-01 00:46:28.043 [INFO][4266] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="8dacfa0c4a5d0c5620233d44dc758e17bf26b41a780f59c14053aa94f4f8edf5" HandleID="k8s-pod-network.8dacfa0c4a5d0c5620233d44dc758e17bf26b41a780f59c14053aa94f4f8edf5" Workload="localhost-k8s-coredns--668d6bf9bc--jfbds-eth0" Nov 1 00:46:28.124717 env[1313]: 2025-11-01 00:46:28.061 [INFO][4192] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8dacfa0c4a5d0c5620233d44dc758e17bf26b41a780f59c14053aa94f4f8edf5" Namespace="kube-system" Pod="coredns-668d6bf9bc-jfbds" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jfbds-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--jfbds-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6a5dd67d-d6ec-4bd2-9dac-f43fc3314f67", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 45, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", 
Pod:"coredns-668d6bf9bc-jfbds", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7b78ab7e26f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:46:28.124717 env[1313]: 2025-11-01 00:46:28.061 [INFO][4192] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="8dacfa0c4a5d0c5620233d44dc758e17bf26b41a780f59c14053aa94f4f8edf5" Namespace="kube-system" Pod="coredns-668d6bf9bc-jfbds" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jfbds-eth0" Nov 1 00:46:28.124717 env[1313]: 2025-11-01 00:46:28.061 [INFO][4192] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7b78ab7e26f ContainerID="8dacfa0c4a5d0c5620233d44dc758e17bf26b41a780f59c14053aa94f4f8edf5" Namespace="kube-system" Pod="coredns-668d6bf9bc-jfbds" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jfbds-eth0" Nov 1 00:46:28.124717 env[1313]: 2025-11-01 00:46:28.097 [INFO][4192] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8dacfa0c4a5d0c5620233d44dc758e17bf26b41a780f59c14053aa94f4f8edf5" Namespace="kube-system" Pod="coredns-668d6bf9bc-jfbds" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jfbds-eth0" Nov 1 00:46:28.124717 env[1313]: 2025-11-01 00:46:28.098 [INFO][4192] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="8dacfa0c4a5d0c5620233d44dc758e17bf26b41a780f59c14053aa94f4f8edf5" Namespace="kube-system" Pod="coredns-668d6bf9bc-jfbds" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jfbds-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--jfbds-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6a5dd67d-d6ec-4bd2-9dac-f43fc3314f67", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 45, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8dacfa0c4a5d0c5620233d44dc758e17bf26b41a780f59c14053aa94f4f8edf5", Pod:"coredns-668d6bf9bc-jfbds", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7b78ab7e26f", MAC:"da:e6:98:fd:83:d4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:46:28.124717 env[1313]: 2025-11-01 00:46:28.120 [INFO][4192] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8dacfa0c4a5d0c5620233d44dc758e17bf26b41a780f59c14053aa94f4f8edf5" Namespace="kube-system" Pod="coredns-668d6bf9bc-jfbds" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jfbds-eth0" Nov 1 00:46:28.151000 audit[4335]: NETFILTER_CFG table=filter:125 family=2 entries=40 op=nft_register_chain pid=4335 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:46:28.154937 kernel: kauditd_printk_skb: 583 callbacks suppressed Nov 1 00:46:28.155043 kernel: audit: type=1325 audit(1761957988.151:408): table=filter:125 family=2 entries=40 op=nft_register_chain pid=4335 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:46:28.151000 audit[4335]: SYSCALL arch=c000003e syscall=46 success=yes exit=20328 a0=3 a1=7fffe67b5df0 a2=0 a3=7fffe67b5ddc items=0 ppid=3511 pid=4335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:28.167290 env[1313]: time="2025-11-01T00:46:28.167052943Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:46:28.167290 env[1313]: time="2025-11-01T00:46:28.167129196Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:46:28.167290 env[1313]: time="2025-11-01T00:46:28.167143012Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:46:28.168880 env[1313]: time="2025-11-01T00:46:28.167667116Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8dacfa0c4a5d0c5620233d44dc758e17bf26b41a780f59c14053aa94f4f8edf5 pid=4338 runtime=io.containerd.runc.v2 Nov 1 00:46:28.151000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:46:28.178584 kernel: audit: type=1300 audit(1761957988.151:408): arch=c000003e syscall=46 success=yes exit=20328 a0=3 a1=7fffe67b5df0 a2=0 a3=7fffe67b5ddc items=0 ppid=3511 pid=4335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:28.178744 kernel: audit: type=1327 audit(1761957988.151:408): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:46:28.206820 systemd-resolved[1223]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:46:28.232289 systemd-networkd[1077]: calia4f08285b02: Link UP Nov 1 00:46:28.238626 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calia4f08285b02: link becomes ready Nov 1 00:46:28.238999 systemd-networkd[1077]: calia4f08285b02: Gained carrier Nov 1 00:46:28.272253 env[1313]: time="2025-11-01T00:46:28.272193598Z" level=info msg="StopPodSandbox for \"9d6a4ff5375e9dee030a9560e67521d69c889f7455951caa8c67e0f2b2e1c2db\"" Nov 1 00:46:28.280954 env[1313]: time="2025-11-01T00:46:28.280805303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jfbds,Uid:6a5dd67d-d6ec-4bd2-9dac-f43fc3314f67,Namespace:kube-system,Attempt:1,} returns sandbox id 
\"8dacfa0c4a5d0c5620233d44dc758e17bf26b41a780f59c14053aa94f4f8edf5\"" Nov 1 00:46:28.282294 kubelet[2110]: E1101 00:46:28.282129 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:46:28.306430 kernel: audit: type=1325 audit(1761957988.280:409): table=filter:126 family=2 entries=17 op=nft_register_rule pid=4386 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:46:28.306631 kernel: audit: type=1300 audit(1761957988.280:409): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc8f112490 a2=0 a3=7ffc8f11247c items=0 ppid=2264 pid=4386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:28.280000 audit[4386]: NETFILTER_CFG table=filter:126 family=2 entries=17 op=nft_register_rule pid=4386 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:46:28.280000 audit[4386]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc8f112490 a2=0 a3=7ffc8f11247c items=0 ppid=2264 pid=4386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:28.306878 env[1313]: 2025-11-01 00:46:28.036 [INFO][4278] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5dcb6947d5--ljzpr-eth0 calico-kube-controllers-5dcb6947d5- calico-system 6fe081ef-ff27-4230-8865-b572345e2224 1002 0 2025-11-01 00:45:58 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5dcb6947d5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5dcb6947d5-ljzpr eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia4f08285b02 [] [] }} ContainerID="ba7d407c96e942f64c24e1fcb3cf87933e76ff92560749a5112f07ae6befcc9e" Namespace="calico-system" Pod="calico-kube-controllers-5dcb6947d5-ljzpr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5dcb6947d5--ljzpr-" Nov 1 00:46:28.306878 env[1313]: 2025-11-01 00:46:28.036 [INFO][4278] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ba7d407c96e942f64c24e1fcb3cf87933e76ff92560749a5112f07ae6befcc9e" Namespace="calico-system" Pod="calico-kube-controllers-5dcb6947d5-ljzpr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5dcb6947d5--ljzpr-eth0" Nov 1 00:46:28.306878 env[1313]: 2025-11-01 00:46:28.129 [INFO][4306] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ba7d407c96e942f64c24e1fcb3cf87933e76ff92560749a5112f07ae6befcc9e" HandleID="k8s-pod-network.ba7d407c96e942f64c24e1fcb3cf87933e76ff92560749a5112f07ae6befcc9e" Workload="localhost-k8s-calico--kube--controllers--5dcb6947d5--ljzpr-eth0" Nov 1 00:46:28.306878 env[1313]: 2025-11-01 00:46:28.129 [INFO][4306] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ba7d407c96e942f64c24e1fcb3cf87933e76ff92560749a5112f07ae6befcc9e" HandleID="k8s-pod-network.ba7d407c96e942f64c24e1fcb3cf87933e76ff92560749a5112f07ae6befcc9e" Workload="localhost-k8s-calico--kube--controllers--5dcb6947d5--ljzpr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004eaa0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5dcb6947d5-ljzpr", "timestamp":"2025-11-01 00:46:28.129491688 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:46:28.306878 env[1313]: 2025-11-01 00:46:28.129 [INFO][4306] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:46:28.306878 env[1313]: 2025-11-01 00:46:28.129 [INFO][4306] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:46:28.306878 env[1313]: 2025-11-01 00:46:28.130 [INFO][4306] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 00:46:28.306878 env[1313]: 2025-11-01 00:46:28.142 [INFO][4306] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ba7d407c96e942f64c24e1fcb3cf87933e76ff92560749a5112f07ae6befcc9e" host="localhost" Nov 1 00:46:28.306878 env[1313]: 2025-11-01 00:46:28.151 [INFO][4306] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 00:46:28.306878 env[1313]: 2025-11-01 00:46:28.180 [INFO][4306] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 00:46:28.306878 env[1313]: 2025-11-01 00:46:28.185 [INFO][4306] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 00:46:28.306878 env[1313]: 2025-11-01 00:46:28.193 [INFO][4306] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 00:46:28.306878 env[1313]: 2025-11-01 00:46:28.193 [INFO][4306] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ba7d407c96e942f64c24e1fcb3cf87933e76ff92560749a5112f07ae6befcc9e" host="localhost" Nov 1 00:46:28.306878 env[1313]: 2025-11-01 00:46:28.195 [INFO][4306] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ba7d407c96e942f64c24e1fcb3cf87933e76ff92560749a5112f07ae6befcc9e Nov 1 00:46:28.306878 env[1313]: 2025-11-01 00:46:28.205 [INFO][4306] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 
handle="k8s-pod-network.ba7d407c96e942f64c24e1fcb3cf87933e76ff92560749a5112f07ae6befcc9e" host="localhost" Nov 1 00:46:28.306878 env[1313]: 2025-11-01 00:46:28.217 [INFO][4306] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.ba7d407c96e942f64c24e1fcb3cf87933e76ff92560749a5112f07ae6befcc9e" host="localhost" Nov 1 00:46:28.306878 env[1313]: 2025-11-01 00:46:28.217 [INFO][4306] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.ba7d407c96e942f64c24e1fcb3cf87933e76ff92560749a5112f07ae6befcc9e" host="localhost" Nov 1 00:46:28.306878 env[1313]: 2025-11-01 00:46:28.217 [INFO][4306] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:46:28.306878 env[1313]: 2025-11-01 00:46:28.217 [INFO][4306] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="ba7d407c96e942f64c24e1fcb3cf87933e76ff92560749a5112f07ae6befcc9e" HandleID="k8s-pod-network.ba7d407c96e942f64c24e1fcb3cf87933e76ff92560749a5112f07ae6befcc9e" Workload="localhost-k8s-calico--kube--controllers--5dcb6947d5--ljzpr-eth0" Nov 1 00:46:28.307713 env[1313]: 2025-11-01 00:46:28.226 [INFO][4278] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ba7d407c96e942f64c24e1fcb3cf87933e76ff92560749a5112f07ae6befcc9e" Namespace="calico-system" Pod="calico-kube-controllers-5dcb6947d5-ljzpr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5dcb6947d5--ljzpr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5dcb6947d5--ljzpr-eth0", GenerateName:"calico-kube-controllers-5dcb6947d5-", Namespace:"calico-system", SelfLink:"", UID:"6fe081ef-ff27-4230-8865-b572345e2224", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 45, 58, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5dcb6947d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5dcb6947d5-ljzpr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia4f08285b02", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:46:28.307713 env[1313]: 2025-11-01 00:46:28.226 [INFO][4278] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="ba7d407c96e942f64c24e1fcb3cf87933e76ff92560749a5112f07ae6befcc9e" Namespace="calico-system" Pod="calico-kube-controllers-5dcb6947d5-ljzpr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5dcb6947d5--ljzpr-eth0" Nov 1 00:46:28.307713 env[1313]: 2025-11-01 00:46:28.226 [INFO][4278] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia4f08285b02 ContainerID="ba7d407c96e942f64c24e1fcb3cf87933e76ff92560749a5112f07ae6befcc9e" Namespace="calico-system" Pod="calico-kube-controllers-5dcb6947d5-ljzpr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5dcb6947d5--ljzpr-eth0" Nov 1 00:46:28.307713 env[1313]: 2025-11-01 00:46:28.239 [INFO][4278] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="ba7d407c96e942f64c24e1fcb3cf87933e76ff92560749a5112f07ae6befcc9e" Namespace="calico-system" Pod="calico-kube-controllers-5dcb6947d5-ljzpr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5dcb6947d5--ljzpr-eth0" Nov 1 00:46:28.307713 env[1313]: 2025-11-01 00:46:28.253 [INFO][4278] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ba7d407c96e942f64c24e1fcb3cf87933e76ff92560749a5112f07ae6befcc9e" Namespace="calico-system" Pod="calico-kube-controllers-5dcb6947d5-ljzpr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5dcb6947d5--ljzpr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5dcb6947d5--ljzpr-eth0", GenerateName:"calico-kube-controllers-5dcb6947d5-", Namespace:"calico-system", SelfLink:"", UID:"6fe081ef-ff27-4230-8865-b572345e2224", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 45, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5dcb6947d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ba7d407c96e942f64c24e1fcb3cf87933e76ff92560749a5112f07ae6befcc9e", Pod:"calico-kube-controllers-5dcb6947d5-ljzpr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia4f08285b02", MAC:"0a:b6:b3:45:07:dd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:46:28.307713 env[1313]: 2025-11-01 00:46:28.278 [INFO][4278] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ba7d407c96e942f64c24e1fcb3cf87933e76ff92560749a5112f07ae6befcc9e" Namespace="calico-system" Pod="calico-kube-controllers-5dcb6947d5-ljzpr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5dcb6947d5--ljzpr-eth0" Nov 1 00:46:28.307713 env[1313]: time="2025-11-01T00:46:28.287507333Z" level=info msg="CreateContainer within sandbox \"8dacfa0c4a5d0c5620233d44dc758e17bf26b41a780f59c14053aa94f4f8edf5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:46:28.280000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:46:28.314668 env[1313]: time="2025-11-01T00:46:28.314576434Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:46:28.314813 env[1313]: time="2025-11-01T00:46:28.314666883Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:46:28.314813 env[1313]: time="2025-11-01T00:46:28.314697781Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:46:28.315009 env[1313]: time="2025-11-01T00:46:28.314970684Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ba7d407c96e942f64c24e1fcb3cf87933e76ff92560749a5112f07ae6befcc9e pid=4414 runtime=io.containerd.runc.v2 Nov 1 00:46:28.316475 kernel: audit: type=1327 audit(1761957988.280:409): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:46:28.318374 kernel: audit: type=1325 audit(1761957988.306:410): table=nat:127 family=2 entries=35 op=nft_register_chain pid=4386 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:46:28.306000 audit[4386]: NETFILTER_CFG table=nat:127 family=2 entries=35 op=nft_register_chain pid=4386 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:46:28.306000 audit[4386]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffc8f112490 a2=0 a3=7ffc8f11247c items=0 ppid=2264 pid=4386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:28.339554 kernel: audit: type=1300 audit(1761957988.306:410): arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffc8f112490 a2=0 a3=7ffc8f11247c items=0 ppid=2264 pid=4386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:28.339697 kernel: audit: type=1327 audit(1761957988.306:410): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:46:28.306000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 
Nov 1 00:46:28.348274 kernel: audit: type=1325 audit(1761957988.333:411): table=filter:128 family=2 entries=54 op=nft_register_chain pid=4425 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:46:28.333000 audit[4425]: NETFILTER_CFG table=filter:128 family=2 entries=54 op=nft_register_chain pid=4425 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:46:28.333000 audit[4425]: SYSCALL arch=c000003e syscall=46 success=yes exit=25976 a0=3 a1=7ffe9d0d5f60 a2=0 a3=7ffe9d0d5f4c items=0 ppid=3511 pid=4425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:28.333000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:46:28.353996 env[1313]: time="2025-11-01T00:46:28.351712029Z" level=info msg="CreateContainer within sandbox \"8dacfa0c4a5d0c5620233d44dc758e17bf26b41a780f59c14053aa94f4f8edf5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"528c023defc581aec36ece9228eeb4184f7c0deecab75706e111ad3587df9070\"" Nov 1 00:46:28.355267 env[1313]: time="2025-11-01T00:46:28.355234210Z" level=info msg="StartContainer for \"528c023defc581aec36ece9228eeb4184f7c0deecab75706e111ad3587df9070\"" Nov 1 00:46:28.375390 systemd-networkd[1077]: cali99bb7c59c79: Link UP Nov 1 00:46:28.380812 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali99bb7c59c79: link becomes ready Nov 1 00:46:28.380258 systemd-networkd[1077]: cali99bb7c59c79: Gained carrier Nov 1 00:46:28.405830 systemd-resolved[1223]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:46:28.416172 env[1313]: 2025-11-01 00:46:28.130 [INFO][4295] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-csi--node--driver--zk5w7-eth0 csi-node-driver- calico-system 323323dc-c361-4116-a022-8e5f45430869 1001 0 2025-11-01 00:45:58 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-zk5w7 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali99bb7c59c79 [] [] }} ContainerID="28a7330eeecbc526801cf8b62858a19617b67b08e4d659ac935e10b5b111cc29" Namespace="calico-system" Pod="csi-node-driver-zk5w7" WorkloadEndpoint="localhost-k8s-csi--node--driver--zk5w7-" Nov 1 00:46:28.416172 env[1313]: 2025-11-01 00:46:28.131 [INFO][4295] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="28a7330eeecbc526801cf8b62858a19617b67b08e4d659ac935e10b5b111cc29" Namespace="calico-system" Pod="csi-node-driver-zk5w7" WorkloadEndpoint="localhost-k8s-csi--node--driver--zk5w7-eth0" Nov 1 00:46:28.416172 env[1313]: 2025-11-01 00:46:28.202 [INFO][4334] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="28a7330eeecbc526801cf8b62858a19617b67b08e4d659ac935e10b5b111cc29" HandleID="k8s-pod-network.28a7330eeecbc526801cf8b62858a19617b67b08e4d659ac935e10b5b111cc29" Workload="localhost-k8s-csi--node--driver--zk5w7-eth0" Nov 1 00:46:28.416172 env[1313]: 2025-11-01 00:46:28.202 [INFO][4334] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="28a7330eeecbc526801cf8b62858a19617b67b08e4d659ac935e10b5b111cc29" HandleID="k8s-pod-network.28a7330eeecbc526801cf8b62858a19617b67b08e4d659ac935e10b5b111cc29" Workload="localhost-k8s-csi--node--driver--zk5w7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a47d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-zk5w7", 
"timestamp":"2025-11-01 00:46:28.202448843 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:46:28.416172 env[1313]: 2025-11-01 00:46:28.202 [INFO][4334] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:46:28.416172 env[1313]: 2025-11-01 00:46:28.220 [INFO][4334] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:46:28.416172 env[1313]: 2025-11-01 00:46:28.220 [INFO][4334] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 00:46:28.416172 env[1313]: 2025-11-01 00:46:28.248 [INFO][4334] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.28a7330eeecbc526801cf8b62858a19617b67b08e4d659ac935e10b5b111cc29" host="localhost" Nov 1 00:46:28.416172 env[1313]: 2025-11-01 00:46:28.265 [INFO][4334] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 00:46:28.416172 env[1313]: 2025-11-01 00:46:28.291 [INFO][4334] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 00:46:28.416172 env[1313]: 2025-11-01 00:46:28.300 [INFO][4334] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 00:46:28.416172 env[1313]: 2025-11-01 00:46:28.305 [INFO][4334] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 00:46:28.416172 env[1313]: 2025-11-01 00:46:28.305 [INFO][4334] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.28a7330eeecbc526801cf8b62858a19617b67b08e4d659ac935e10b5b111cc29" host="localhost" Nov 1 00:46:28.416172 env[1313]: 2025-11-01 00:46:28.318 [INFO][4334] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.28a7330eeecbc526801cf8b62858a19617b67b08e4d659ac935e10b5b111cc29 Nov 1 
00:46:28.416172 env[1313]: 2025-11-01 00:46:28.340 [INFO][4334] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.28a7330eeecbc526801cf8b62858a19617b67b08e4d659ac935e10b5b111cc29" host="localhost" Nov 1 00:46:28.416172 env[1313]: 2025-11-01 00:46:28.358 [INFO][4334] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.28a7330eeecbc526801cf8b62858a19617b67b08e4d659ac935e10b5b111cc29" host="localhost" Nov 1 00:46:28.416172 env[1313]: 2025-11-01 00:46:28.359 [INFO][4334] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.28a7330eeecbc526801cf8b62858a19617b67b08e4d659ac935e10b5b111cc29" host="localhost" Nov 1 00:46:28.416172 env[1313]: 2025-11-01 00:46:28.359 [INFO][4334] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:46:28.416172 env[1313]: 2025-11-01 00:46:28.359 [INFO][4334] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="28a7330eeecbc526801cf8b62858a19617b67b08e4d659ac935e10b5b111cc29" HandleID="k8s-pod-network.28a7330eeecbc526801cf8b62858a19617b67b08e4d659ac935e10b5b111cc29" Workload="localhost-k8s-csi--node--driver--zk5w7-eth0" Nov 1 00:46:28.417120 env[1313]: 2025-11-01 00:46:28.364 [INFO][4295] cni-plugin/k8s.go 418: Populated endpoint ContainerID="28a7330eeecbc526801cf8b62858a19617b67b08e4d659ac935e10b5b111cc29" Namespace="calico-system" Pod="csi-node-driver-zk5w7" WorkloadEndpoint="localhost-k8s-csi--node--driver--zk5w7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--zk5w7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"323323dc-c361-4116-a022-8e5f45430869", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 45, 58, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-zk5w7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali99bb7c59c79", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:46:28.417120 env[1313]: 2025-11-01 00:46:28.365 [INFO][4295] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="28a7330eeecbc526801cf8b62858a19617b67b08e4d659ac935e10b5b111cc29" Namespace="calico-system" Pod="csi-node-driver-zk5w7" WorkloadEndpoint="localhost-k8s-csi--node--driver--zk5w7-eth0" Nov 1 00:46:28.417120 env[1313]: 2025-11-01 00:46:28.365 [INFO][4295] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali99bb7c59c79 ContainerID="28a7330eeecbc526801cf8b62858a19617b67b08e4d659ac935e10b5b111cc29" Namespace="calico-system" Pod="csi-node-driver-zk5w7" WorkloadEndpoint="localhost-k8s-csi--node--driver--zk5w7-eth0" Nov 1 00:46:28.417120 env[1313]: 2025-11-01 00:46:28.384 [INFO][4295] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="28a7330eeecbc526801cf8b62858a19617b67b08e4d659ac935e10b5b111cc29" Namespace="calico-system" 
Pod="csi-node-driver-zk5w7" WorkloadEndpoint="localhost-k8s-csi--node--driver--zk5w7-eth0" Nov 1 00:46:28.417120 env[1313]: 2025-11-01 00:46:28.384 [INFO][4295] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="28a7330eeecbc526801cf8b62858a19617b67b08e4d659ac935e10b5b111cc29" Namespace="calico-system" Pod="csi-node-driver-zk5w7" WorkloadEndpoint="localhost-k8s-csi--node--driver--zk5w7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--zk5w7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"323323dc-c361-4116-a022-8e5f45430869", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 45, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"28a7330eeecbc526801cf8b62858a19617b67b08e4d659ac935e10b5b111cc29", Pod:"csi-node-driver-zk5w7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali99bb7c59c79", MAC:"c2:cd:d6:e5:fd:8a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:46:28.417120 env[1313]: 2025-11-01 00:46:28.406 [INFO][4295] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="28a7330eeecbc526801cf8b62858a19617b67b08e4d659ac935e10b5b111cc29" Namespace="calico-system" Pod="csi-node-driver-zk5w7" WorkloadEndpoint="localhost-k8s-csi--node--driver--zk5w7-eth0" Nov 1 00:46:28.436000 audit[4476]: NETFILTER_CFG table=filter:129 family=2 entries=48 op=nft_register_chain pid=4476 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:46:28.436000 audit[4476]: SYSCALL arch=c000003e syscall=46 success=yes exit=23108 a0=3 a1=7fff617e9f50 a2=0 a3=7fff617e9f3c items=0 ppid=3511 pid=4476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:28.436000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:46:28.456082 env[1313]: time="2025-11-01T00:46:28.455167521Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:46:28.457535 env[1313]: time="2025-11-01T00:46:28.457450055Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:46:28.458467 env[1313]: time="2025-11-01T00:46:28.458410989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:46:28.459728 env[1313]: time="2025-11-01T00:46:28.459516895Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/28a7330eeecbc526801cf8b62858a19617b67b08e4d659ac935e10b5b111cc29 pid=4484 runtime=io.containerd.runc.v2 Nov 1 00:46:28.461849 env[1313]: time="2025-11-01T00:46:28.461294161Z" level=info msg="StartContainer for \"528c023defc581aec36ece9228eeb4184f7c0deecab75706e111ad3587df9070\" returns successfully" Nov 1 00:46:28.509108 env[1313]: time="2025-11-01T00:46:28.506450132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5dcb6947d5-ljzpr,Uid:6fe081ef-ff27-4230-8865-b572345e2224,Namespace:calico-system,Attempt:1,} returns sandbox id \"ba7d407c96e942f64c24e1fcb3cf87933e76ff92560749a5112f07ae6befcc9e\"" Nov 1 00:46:28.513193 systemd-resolved[1223]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:46:28.516566 env[1313]: time="2025-11-01T00:46:28.516515755Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:46:28.541606 env[1313]: time="2025-11-01T00:46:28.539188303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zk5w7,Uid:323323dc-c361-4116-a022-8e5f45430869,Namespace:calico-system,Attempt:1,} returns sandbox id \"28a7330eeecbc526801cf8b62858a19617b67b08e4d659ac935e10b5b111cc29\"" Nov 1 00:46:28.929458 kubelet[2110]: E1101 00:46:28.929397 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:46:28.929949 kubelet[2110]: E1101 00:46:28.929747 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:46:28.974071 env[1313]: 
time="2025-11-01T00:46:28.973540189Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:46:29.072626 env[1313]: time="2025-11-01T00:46:29.072541312Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:46:29.073099 kubelet[2110]: E1101 00:46:29.073056 2110 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:46:29.073189 kubelet[2110]: E1101 00:46:29.073110 2110 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:46:29.073543 kubelet[2110]: E1101 00:46:29.073475 2110 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4mfw7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5dcb6947d5-ljzpr_calico-system(6fe081ef-ff27-4230-8865-b572345e2224): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:46:29.073775 env[1313]: time="2025-11-01T00:46:29.073684348Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:46:29.075641 kubelet[2110]: E1101 00:46:29.075584 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5dcb6947d5-ljzpr" podUID="6fe081ef-ff27-4230-8865-b572345e2224" Nov 1 00:46:29.083977 env[1313]: 2025-11-01 00:46:28.447 
[INFO][4403] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9d6a4ff5375e9dee030a9560e67521d69c889f7455951caa8c67e0f2b2e1c2db" Nov 1 00:46:29.083977 env[1313]: 2025-11-01 00:46:28.448 [INFO][4403] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9d6a4ff5375e9dee030a9560e67521d69c889f7455951caa8c67e0f2b2e1c2db" iface="eth0" netns="/var/run/netns/cni-dc70a05d-77a4-521a-451d-e8765330d547" Nov 1 00:46:29.083977 env[1313]: 2025-11-01 00:46:28.448 [INFO][4403] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9d6a4ff5375e9dee030a9560e67521d69c889f7455951caa8c67e0f2b2e1c2db" iface="eth0" netns="/var/run/netns/cni-dc70a05d-77a4-521a-451d-e8765330d547" Nov 1 00:46:29.083977 env[1313]: 2025-11-01 00:46:28.448 [INFO][4403] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9d6a4ff5375e9dee030a9560e67521d69c889f7455951caa8c67e0f2b2e1c2db" iface="eth0" netns="/var/run/netns/cni-dc70a05d-77a4-521a-451d-e8765330d547" Nov 1 00:46:29.083977 env[1313]: 2025-11-01 00:46:28.448 [INFO][4403] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9d6a4ff5375e9dee030a9560e67521d69c889f7455951caa8c67e0f2b2e1c2db" Nov 1 00:46:29.083977 env[1313]: 2025-11-01 00:46:28.448 [INFO][4403] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9d6a4ff5375e9dee030a9560e67521d69c889f7455951caa8c67e0f2b2e1c2db" Nov 1 00:46:29.083977 env[1313]: 2025-11-01 00:46:28.546 [INFO][4494] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9d6a4ff5375e9dee030a9560e67521d69c889f7455951caa8c67e0f2b2e1c2db" HandleID="k8s-pod-network.9d6a4ff5375e9dee030a9560e67521d69c889f7455951caa8c67e0f2b2e1c2db" Workload="localhost-k8s-goldmane--666569f655--bbnwx-eth0" Nov 1 00:46:29.083977 env[1313]: 2025-11-01 00:46:28.547 [INFO][4494] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 1 00:46:29.083977 env[1313]: 2025-11-01 00:46:28.547 [INFO][4494] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:46:29.083977 env[1313]: 2025-11-01 00:46:29.072 [WARNING][4494] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="9d6a4ff5375e9dee030a9560e67521d69c889f7455951caa8c67e0f2b2e1c2db" HandleID="k8s-pod-network.9d6a4ff5375e9dee030a9560e67521d69c889f7455951caa8c67e0f2b2e1c2db" Workload="localhost-k8s-goldmane--666569f655--bbnwx-eth0" Nov 1 00:46:29.083977 env[1313]: 2025-11-01 00:46:29.072 [INFO][4494] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9d6a4ff5375e9dee030a9560e67521d69c889f7455951caa8c67e0f2b2e1c2db" HandleID="k8s-pod-network.9d6a4ff5375e9dee030a9560e67521d69c889f7455951caa8c67e0f2b2e1c2db" Workload="localhost-k8s-goldmane--666569f655--bbnwx-eth0" Nov 1 00:46:29.083977 env[1313]: 2025-11-01 00:46:29.078 [INFO][4494] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:46:29.083977 env[1313]: 2025-11-01 00:46:29.080 [INFO][4403] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="9d6a4ff5375e9dee030a9560e67521d69c889f7455951caa8c67e0f2b2e1c2db" Nov 1 00:46:29.085452 env[1313]: time="2025-11-01T00:46:29.085294008Z" level=info msg="TearDown network for sandbox \"9d6a4ff5375e9dee030a9560e67521d69c889f7455951caa8c67e0f2b2e1c2db\" successfully" Nov 1 00:46:29.085452 env[1313]: time="2025-11-01T00:46:29.085365211Z" level=info msg="StopPodSandbox for \"9d6a4ff5375e9dee030a9560e67521d69c889f7455951caa8c67e0f2b2e1c2db\" returns successfully" Nov 1 00:46:29.086314 env[1313]: time="2025-11-01T00:46:29.086271854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-bbnwx,Uid:5fbcbf90-e90d-4d2e-bb2c-68aa5206a338,Namespace:calico-system,Attempt:1,}" Nov 1 00:46:29.225982 kubelet[2110]: I1101 00:46:29.219275 2110 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-jfbds" podStartSLOduration=48.219196346 podStartE2EDuration="48.219196346s" podCreationTimestamp="2025-11-01 00:45:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:46:29.218110097 +0000 UTC m=+53.077567887" watchObservedRunningTime="2025-11-01 00:46:29.219196346 +0000 UTC m=+53.078654166" Nov 1 00:46:29.254040 systemd[1]: run-netns-cni\x2ddc70a05d\x2d77a4\x2d521a\x2d451d\x2de8765330d547.mount: Deactivated successfully. 
Nov 1 00:46:29.320000 audit[4561]: NETFILTER_CFG table=filter:130 family=2 entries=14 op=nft_register_rule pid=4561 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:46:29.320000 audit[4561]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcca5f6ff0 a2=0 a3=7ffcca5f6fdc items=0 ppid=2264 pid=4561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:29.320000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:46:29.326000 audit[4561]: NETFILTER_CFG table=nat:131 family=2 entries=44 op=nft_register_rule pid=4561 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:46:29.326000 audit[4561]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffcca5f6ff0 a2=0 a3=7ffcca5f6fdc items=0 ppid=2264 pid=4561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:29.326000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:46:29.455471 env[1313]: time="2025-11-01T00:46:29.455273412Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:46:29.539488 systemd-networkd[1077]: cali99bb7c59c79: Gained IPv6LL Nov 1 00:46:29.539801 systemd-networkd[1077]: cali7b78ab7e26f: Gained IPv6LL Nov 1 00:46:29.574064 env[1313]: time="2025-11-01T00:46:29.573952598Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:46:29.575337 kubelet[2110]: E1101 00:46:29.574290 2110 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:46:29.575337 kubelet[2110]: E1101 00:46:29.574402 2110 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:46:29.575337 kubelet[2110]: E1101 00:46:29.574561 2110 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lq9vl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zk5w7_calico-system(323323dc-c361-4116-a022-8e5f45430869): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:46:29.580977 env[1313]: time="2025-11-01T00:46:29.580905858Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:46:29.856399 systemd-networkd[1077]: calia4f08285b02: Gained IPv6LL Nov 1 00:46:29.933857 kubelet[2110]: E1101 00:46:29.933818 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:46:29.935006 kubelet[2110]: E1101 00:46:29.934796 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5dcb6947d5-ljzpr" podUID="6fe081ef-ff27-4230-8865-b572345e2224" Nov 1 00:46:29.986738 env[1313]: time="2025-11-01T00:46:29.986652377Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:46:30.031533 env[1313]: time="2025-11-01T00:46:30.031438935Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:46:30.031803 kubelet[2110]: E1101 00:46:30.031740 2110 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:46:30.031862 kubelet[2110]: E1101 00:46:30.031814 2110 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:46:30.032014 kubelet[2110]: E1101 00:46:30.031967 2110 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lq9vl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,R
ecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zk5w7_calico-system(323323dc-c361-4116-a022-8e5f45430869): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:46:30.033229 kubelet[2110]: E1101 00:46:30.033176 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zk5w7" podUID="323323dc-c361-4116-a022-8e5f45430869" Nov 1 
00:46:30.086096 systemd-networkd[1077]: cali090a3938826: Link UP Nov 1 00:46:30.090944 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Nov 1 00:46:30.091058 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali090a3938826: link becomes ready Nov 1 00:46:30.092954 systemd-networkd[1077]: cali090a3938826: Gained carrier Nov 1 00:46:30.123240 env[1313]: 2025-11-01 00:46:29.377 [INFO][4552] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--bbnwx-eth0 goldmane-666569f655- calico-system 5fbcbf90-e90d-4d2e-bb2c-68aa5206a338 1032 0 2025-11-01 00:45:54 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-bbnwx eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali090a3938826 [] [] }} ContainerID="ba105215c48bf68bc29e525c05e16a676daa83d13ea02168e19ddfc673e27957" Namespace="calico-system" Pod="goldmane-666569f655-bbnwx" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--bbnwx-" Nov 1 00:46:30.123240 env[1313]: 2025-11-01 00:46:29.377 [INFO][4552] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ba105215c48bf68bc29e525c05e16a676daa83d13ea02168e19ddfc673e27957" Namespace="calico-system" Pod="goldmane-666569f655-bbnwx" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--bbnwx-eth0" Nov 1 00:46:30.123240 env[1313]: 2025-11-01 00:46:29.425 [INFO][4567] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ba105215c48bf68bc29e525c05e16a676daa83d13ea02168e19ddfc673e27957" HandleID="k8s-pod-network.ba105215c48bf68bc29e525c05e16a676daa83d13ea02168e19ddfc673e27957" Workload="localhost-k8s-goldmane--666569f655--bbnwx-eth0" Nov 1 00:46:30.123240 env[1313]: 2025-11-01 00:46:29.426 [INFO][4567] ipam/ipam_plugin.go 
275: Auto assigning IP ContainerID="ba105215c48bf68bc29e525c05e16a676daa83d13ea02168e19ddfc673e27957" HandleID="k8s-pod-network.ba105215c48bf68bc29e525c05e16a676daa83d13ea02168e19ddfc673e27957" Workload="localhost-k8s-goldmane--666569f655--bbnwx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000360580), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-bbnwx", "timestamp":"2025-11-01 00:46:29.425715854 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:46:30.123240 env[1313]: 2025-11-01 00:46:29.426 [INFO][4567] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:46:30.123240 env[1313]: 2025-11-01 00:46:29.426 [INFO][4567] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:46:30.123240 env[1313]: 2025-11-01 00:46:29.426 [INFO][4567] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 00:46:30.123240 env[1313]: 2025-11-01 00:46:29.882 [INFO][4567] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ba105215c48bf68bc29e525c05e16a676daa83d13ea02168e19ddfc673e27957" host="localhost" Nov 1 00:46:30.123240 env[1313]: 2025-11-01 00:46:29.968 [INFO][4567] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 00:46:30.123240 env[1313]: 2025-11-01 00:46:30.044 [INFO][4567] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 00:46:30.123240 env[1313]: 2025-11-01 00:46:30.054 [INFO][4567] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 00:46:30.123240 env[1313]: 2025-11-01 00:46:30.059 [INFO][4567] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 00:46:30.123240 
env[1313]: 2025-11-01 00:46:30.059 [INFO][4567] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ba105215c48bf68bc29e525c05e16a676daa83d13ea02168e19ddfc673e27957" host="localhost" Nov 1 00:46:30.123240 env[1313]: 2025-11-01 00:46:30.064 [INFO][4567] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ba105215c48bf68bc29e525c05e16a676daa83d13ea02168e19ddfc673e27957 Nov 1 00:46:30.123240 env[1313]: 2025-11-01 00:46:30.070 [INFO][4567] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ba105215c48bf68bc29e525c05e16a676daa83d13ea02168e19ddfc673e27957" host="localhost" Nov 1 00:46:30.123240 env[1313]: 2025-11-01 00:46:30.078 [INFO][4567] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.ba105215c48bf68bc29e525c05e16a676daa83d13ea02168e19ddfc673e27957" host="localhost" Nov 1 00:46:30.123240 env[1313]: 2025-11-01 00:46:30.078 [INFO][4567] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.ba105215c48bf68bc29e525c05e16a676daa83d13ea02168e19ddfc673e27957" host="localhost" Nov 1 00:46:30.123240 env[1313]: 2025-11-01 00:46:30.078 [INFO][4567] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:46:30.123240 env[1313]: 2025-11-01 00:46:30.078 [INFO][4567] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="ba105215c48bf68bc29e525c05e16a676daa83d13ea02168e19ddfc673e27957" HandleID="k8s-pod-network.ba105215c48bf68bc29e525c05e16a676daa83d13ea02168e19ddfc673e27957" Workload="localhost-k8s-goldmane--666569f655--bbnwx-eth0" Nov 1 00:46:30.123923 env[1313]: 2025-11-01 00:46:30.081 [INFO][4552] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ba105215c48bf68bc29e525c05e16a676daa83d13ea02168e19ddfc673e27957" Namespace="calico-system" Pod="goldmane-666569f655-bbnwx" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--bbnwx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--bbnwx-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"5fbcbf90-e90d-4d2e-bb2c-68aa5206a338", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 45, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-bbnwx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali090a3938826", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:46:30.123923 env[1313]: 2025-11-01 00:46:30.081 [INFO][4552] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="ba105215c48bf68bc29e525c05e16a676daa83d13ea02168e19ddfc673e27957" Namespace="calico-system" Pod="goldmane-666569f655-bbnwx" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--bbnwx-eth0" Nov 1 00:46:30.123923 env[1313]: 2025-11-01 00:46:30.081 [INFO][4552] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali090a3938826 ContainerID="ba105215c48bf68bc29e525c05e16a676daa83d13ea02168e19ddfc673e27957" Namespace="calico-system" Pod="goldmane-666569f655-bbnwx" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--bbnwx-eth0" Nov 1 00:46:30.123923 env[1313]: 2025-11-01 00:46:30.092 [INFO][4552] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ba105215c48bf68bc29e525c05e16a676daa83d13ea02168e19ddfc673e27957" Namespace="calico-system" Pod="goldmane-666569f655-bbnwx" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--bbnwx-eth0" Nov 1 00:46:30.123923 env[1313]: 2025-11-01 00:46:30.094 [INFO][4552] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ba105215c48bf68bc29e525c05e16a676daa83d13ea02168e19ddfc673e27957" Namespace="calico-system" Pod="goldmane-666569f655-bbnwx" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--bbnwx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--bbnwx-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"5fbcbf90-e90d-4d2e-bb2c-68aa5206a338", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 45, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ba105215c48bf68bc29e525c05e16a676daa83d13ea02168e19ddfc673e27957", Pod:"goldmane-666569f655-bbnwx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali090a3938826", MAC:"f2:07:de:d8:86:e9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:46:30.123923 env[1313]: 2025-11-01 00:46:30.117 [INFO][4552] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ba105215c48bf68bc29e525c05e16a676daa83d13ea02168e19ddfc673e27957" Namespace="calico-system" Pod="goldmane-666569f655-bbnwx" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--bbnwx-eth0" Nov 1 00:46:30.165217 env[1313]: time="2025-11-01T00:46:30.165132559Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:46:30.165217 env[1313]: time="2025-11-01T00:46:30.165216898Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:46:30.165441 env[1313]: time="2025-11-01T00:46:30.165240291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:46:30.165494 env[1313]: time="2025-11-01T00:46:30.165459863Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ba105215c48bf68bc29e525c05e16a676daa83d13ea02168e19ddfc673e27957 pid=4589 runtime=io.containerd.runc.v2 Nov 1 00:46:30.228000 audit[4622]: NETFILTER_CFG table=filter:132 family=2 entries=60 op=nft_register_chain pid=4622 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:46:30.228000 audit[4622]: SYSCALL arch=c000003e syscall=46 success=yes exit=29900 a0=3 a1=7ffdaa0a98d0 a2=0 a3=7ffdaa0a98bc items=0 ppid=3511 pid=4622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:30.228000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:46:30.236432 systemd-resolved[1223]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:46:30.245230 systemd[1]: run-containerd-runc-k8s.io-ba105215c48bf68bc29e525c05e16a676daa83d13ea02168e19ddfc673e27957-runc.5BhF9p.mount: Deactivated successfully. 
Nov 1 00:46:30.267793 env[1313]: time="2025-11-01T00:46:30.267737053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-bbnwx,Uid:5fbcbf90-e90d-4d2e-bb2c-68aa5206a338,Namespace:calico-system,Attempt:1,} returns sandbox id \"ba105215c48bf68bc29e525c05e16a676daa83d13ea02168e19ddfc673e27957\"" Nov 1 00:46:30.271051 env[1313]: time="2025-11-01T00:46:30.271013362Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:46:30.348000 audit[4630]: NETFILTER_CFG table=filter:133 family=2 entries=14 op=nft_register_rule pid=4630 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:46:30.348000 audit[4630]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe6d1dd520 a2=0 a3=7ffe6d1dd50c items=0 ppid=2264 pid=4630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:30.348000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:46:30.363000 audit[4630]: NETFILTER_CFG table=nat:134 family=2 entries=56 op=nft_register_chain pid=4630 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:46:30.363000 audit[4630]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffe6d1dd520 a2=0 a3=7ffe6d1dd50c items=0 ppid=2264 pid=4630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:30.363000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:46:30.447423 systemd[1]: Started sshd@7-10.0.0.128:22-10.0.0.1:42240.service. 
Nov 1 00:46:30.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.128:22-10.0.0.1:42240 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:46:30.516000 audit[4632]: USER_ACCT pid=4632 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:30.516779 sshd[4632]: Accepted publickey for core from 10.0.0.1 port 42240 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:46:30.517000 audit[4632]: CRED_ACQ pid=4632 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:30.517000 audit[4632]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffee4cd9fc0 a2=3 a3=0 items=0 ppid=1 pid=4632 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:30.517000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:46:30.518892 sshd[4632]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:46:30.522951 systemd-logind[1290]: New session 8 of user core. Nov 1 00:46:30.523719 systemd[1]: Started session-8.scope. 
Nov 1 00:46:30.529000 audit[4632]: USER_START pid=4632 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:30.530000 audit[4635]: CRED_ACQ pid=4635 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:30.663925 sshd[4632]: pam_unix(sshd:session): session closed for user core Nov 1 00:46:30.664000 audit[4632]: USER_END pid=4632 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:30.664000 audit[4632]: CRED_DISP pid=4632 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:30.666300 systemd[1]: sshd@7-10.0.0.128:22-10.0.0.1:42240.service: Deactivated successfully. Nov 1 00:46:30.666000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.128:22-10.0.0.1:42240 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:46:30.667590 systemd-logind[1290]: Session 8 logged out. Waiting for processes to exit. Nov 1 00:46:30.667621 systemd[1]: session-8.scope: Deactivated successfully. Nov 1 00:46:30.668489 systemd-logind[1290]: Removed session 8. 
Nov 1 00:46:30.704385 env[1313]: time="2025-11-01T00:46:30.704179361Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:46:30.706480 env[1313]: time="2025-11-01T00:46:30.706410880Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:46:30.706780 kubelet[2110]: E1101 00:46:30.706735 2110 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:46:30.706863 kubelet[2110]: E1101 00:46:30.706801 2110 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:46:30.707048 kubelet[2110]: E1101 00:46:30.706988 2110 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lrxgv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-bbnwx_calico-system(5fbcbf90-e90d-4d2e-bb2c-68aa5206a338): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:46:30.708157 kubelet[2110]: E1101 00:46:30.708124 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bbnwx" podUID="5fbcbf90-e90d-4d2e-bb2c-68aa5206a338" Nov 1 00:46:30.937669 kubelet[2110]: E1101 00:46:30.937625 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:46:30.938364 kubelet[2110]: 
E1101 00:46:30.938306 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bbnwx" podUID="5fbcbf90-e90d-4d2e-bb2c-68aa5206a338" Nov 1 00:46:30.938777 kubelet[2110]: E1101 00:46:30.938727 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zk5w7" podUID="323323dc-c361-4116-a022-8e5f45430869" Nov 1 00:46:31.260525 systemd-networkd[1077]: cali090a3938826: Gained IPv6LL Nov 1 00:46:31.378000 audit[4648]: NETFILTER_CFG table=filter:135 family=2 entries=14 op=nft_register_rule pid=4648 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:46:31.378000 audit[4648]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 
a1=7ffc5d6a41c0 a2=0 a3=7ffc5d6a41ac items=0 ppid=2264 pid=4648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:31.378000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:46:31.383000 audit[4648]: NETFILTER_CFG table=nat:136 family=2 entries=20 op=nft_register_rule pid=4648 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:46:31.383000 audit[4648]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc5d6a41c0 a2=0 a3=7ffc5d6a41ac items=0 ppid=2264 pid=4648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:31.383000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:46:31.940614 kubelet[2110]: E1101 00:46:31.940571 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bbnwx" podUID="5fbcbf90-e90d-4d2e-bb2c-68aa5206a338" Nov 1 00:46:35.670125 systemd[1]: Started sshd@8-10.0.0.128:22-10.0.0.1:42244.service. 
Nov 1 00:46:35.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.128:22-10.0.0.1:42244 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:46:35.675048 kernel: kauditd_printk_skb: 37 callbacks suppressed Nov 1 00:46:35.676359 kernel: audit: type=1130 audit(1761957995.671:429): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.128:22-10.0.0.1:42244 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:46:35.706000 audit[4657]: USER_ACCT pid=4657 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:35.708093 sshd[4657]: Accepted publickey for core from 10.0.0.1 port 42244 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:46:35.710274 sshd[4657]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:46:35.716465 systemd[1]: Started session-9.scope. Nov 1 00:46:35.709000 audit[4657]: CRED_ACQ pid=4657 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:35.718091 systemd-logind[1290]: New session 9 of user core. 
Nov 1 00:46:35.725795 kernel: audit: type=1101 audit(1761957995.706:430): pid=4657 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:35.725906 kernel: audit: type=1103 audit(1761957995.709:431): pid=4657 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:35.725956 kernel: audit: type=1006 audit(1761957995.709:432): pid=4657 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Nov 1 00:46:35.709000 audit[4657]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc9f5299d0 a2=3 a3=0 items=0 ppid=1 pid=4657 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:35.759333 kernel: audit: type=1300 audit(1761957995.709:432): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc9f5299d0 a2=3 a3=0 items=0 ppid=1 pid=4657 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:35.759520 kernel: audit: type=1327 audit(1761957995.709:432): proctitle=737368643A20636F7265205B707269765D Nov 1 00:46:35.709000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:46:35.723000 audit[4657]: USER_START pid=4657 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 
00:46:35.772313 kernel: audit: type=1105 audit(1761957995.723:433): pid=4657 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:35.772505 kernel: audit: type=1103 audit(1761957995.726:434): pid=4660 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:35.726000 audit[4660]: CRED_ACQ pid=4660 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:35.892173 sshd[4657]: pam_unix(sshd:session): session closed for user core Nov 1 00:46:35.892000 audit[4657]: USER_END pid=4657 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:35.894545 systemd[1]: sshd@8-10.0.0.128:22-10.0.0.1:42244.service: Deactivated successfully. Nov 1 00:46:35.895551 systemd[1]: session-9.scope: Deactivated successfully. Nov 1 00:46:35.900112 systemd-logind[1290]: Session 9 logged out. Waiting for processes to exit. Nov 1 00:46:35.900939 systemd-logind[1290]: Removed session 9. 
Nov 1 00:46:35.893000 audit[4657]: CRED_DISP pid=4657 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:35.917150 kernel: audit: type=1106 audit(1761957995.892:435): pid=4657 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:35.917255 kernel: audit: type=1104 audit(1761957995.893:436): pid=4657 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:35.894000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.128:22-10.0.0.1:42244 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:46:36.253295 env[1313]: time="2025-11-01T00:46:36.253246618Z" level=info msg="StopPodSandbox for \"4a48ae9ec8973deb46659af44aa4a65628f3f9ec5347c31c080a0a8d52ee7cfa\"" Nov 1 00:46:36.363504 env[1313]: 2025-11-01 00:46:36.297 [WARNING][4682] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4a48ae9ec8973deb46659af44aa4a65628f3f9ec5347c31c080a0a8d52ee7cfa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--jfbds-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6a5dd67d-d6ec-4bd2-9dac-f43fc3314f67", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 45, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8dacfa0c4a5d0c5620233d44dc758e17bf26b41a780f59c14053aa94f4f8edf5", Pod:"coredns-668d6bf9bc-jfbds", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7b78ab7e26f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:46:36.363504 env[1313]: 2025-11-01 00:46:36.297 [INFO][4682] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4a48ae9ec8973deb46659af44aa4a65628f3f9ec5347c31c080a0a8d52ee7cfa" Nov 1 00:46:36.363504 env[1313]: 2025-11-01 00:46:36.297 [INFO][4682] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4a48ae9ec8973deb46659af44aa4a65628f3f9ec5347c31c080a0a8d52ee7cfa" iface="eth0" netns="" Nov 1 00:46:36.363504 env[1313]: 2025-11-01 00:46:36.298 [INFO][4682] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4a48ae9ec8973deb46659af44aa4a65628f3f9ec5347c31c080a0a8d52ee7cfa" Nov 1 00:46:36.363504 env[1313]: 2025-11-01 00:46:36.298 [INFO][4682] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4a48ae9ec8973deb46659af44aa4a65628f3f9ec5347c31c080a0a8d52ee7cfa" Nov 1 00:46:36.363504 env[1313]: 2025-11-01 00:46:36.340 [INFO][4692] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4a48ae9ec8973deb46659af44aa4a65628f3f9ec5347c31c080a0a8d52ee7cfa" HandleID="k8s-pod-network.4a48ae9ec8973deb46659af44aa4a65628f3f9ec5347c31c080a0a8d52ee7cfa" Workload="localhost-k8s-coredns--668d6bf9bc--jfbds-eth0" Nov 1 00:46:36.363504 env[1313]: 2025-11-01 00:46:36.341 [INFO][4692] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:46:36.363504 env[1313]: 2025-11-01 00:46:36.341 [INFO][4692] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:46:36.363504 env[1313]: 2025-11-01 00:46:36.355 [WARNING][4692] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4a48ae9ec8973deb46659af44aa4a65628f3f9ec5347c31c080a0a8d52ee7cfa" HandleID="k8s-pod-network.4a48ae9ec8973deb46659af44aa4a65628f3f9ec5347c31c080a0a8d52ee7cfa" Workload="localhost-k8s-coredns--668d6bf9bc--jfbds-eth0" Nov 1 00:46:36.363504 env[1313]: 2025-11-01 00:46:36.355 [INFO][4692] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4a48ae9ec8973deb46659af44aa4a65628f3f9ec5347c31c080a0a8d52ee7cfa" HandleID="k8s-pod-network.4a48ae9ec8973deb46659af44aa4a65628f3f9ec5347c31c080a0a8d52ee7cfa" Workload="localhost-k8s-coredns--668d6bf9bc--jfbds-eth0" Nov 1 00:46:36.363504 env[1313]: 2025-11-01 00:46:36.358 [INFO][4692] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:46:36.363504 env[1313]: 2025-11-01 00:46:36.361 [INFO][4682] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4a48ae9ec8973deb46659af44aa4a65628f3f9ec5347c31c080a0a8d52ee7cfa" Nov 1 00:46:36.364250 env[1313]: time="2025-11-01T00:46:36.363546903Z" level=info msg="TearDown network for sandbox \"4a48ae9ec8973deb46659af44aa4a65628f3f9ec5347c31c080a0a8d52ee7cfa\" successfully" Nov 1 00:46:36.364250 env[1313]: time="2025-11-01T00:46:36.363588351Z" level=info msg="StopPodSandbox for \"4a48ae9ec8973deb46659af44aa4a65628f3f9ec5347c31c080a0a8d52ee7cfa\" returns successfully" Nov 1 00:46:36.364472 env[1313]: time="2025-11-01T00:46:36.364418257Z" level=info msg="RemovePodSandbox for \"4a48ae9ec8973deb46659af44aa4a65628f3f9ec5347c31c080a0a8d52ee7cfa\"" Nov 1 00:46:36.364540 env[1313]: time="2025-11-01T00:46:36.364470816Z" level=info msg="Forcibly stopping sandbox \"4a48ae9ec8973deb46659af44aa4a65628f3f9ec5347c31c080a0a8d52ee7cfa\"" Nov 1 00:46:36.452719 env[1313]: 2025-11-01 00:46:36.414 [WARNING][4710] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4a48ae9ec8973deb46659af44aa4a65628f3f9ec5347c31c080a0a8d52ee7cfa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--jfbds-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6a5dd67d-d6ec-4bd2-9dac-f43fc3314f67", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 45, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8dacfa0c4a5d0c5620233d44dc758e17bf26b41a780f59c14053aa94f4f8edf5", Pod:"coredns-668d6bf9bc-jfbds", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7b78ab7e26f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:46:36.452719 env[1313]: 2025-11-01 00:46:36.414 [INFO][4710] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4a48ae9ec8973deb46659af44aa4a65628f3f9ec5347c31c080a0a8d52ee7cfa" Nov 1 00:46:36.452719 env[1313]: 2025-11-01 00:46:36.414 [INFO][4710] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4a48ae9ec8973deb46659af44aa4a65628f3f9ec5347c31c080a0a8d52ee7cfa" iface="eth0" netns="" Nov 1 00:46:36.452719 env[1313]: 2025-11-01 00:46:36.414 [INFO][4710] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4a48ae9ec8973deb46659af44aa4a65628f3f9ec5347c31c080a0a8d52ee7cfa" Nov 1 00:46:36.452719 env[1313]: 2025-11-01 00:46:36.414 [INFO][4710] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4a48ae9ec8973deb46659af44aa4a65628f3f9ec5347c31c080a0a8d52ee7cfa" Nov 1 00:46:36.452719 env[1313]: 2025-11-01 00:46:36.439 [INFO][4719] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4a48ae9ec8973deb46659af44aa4a65628f3f9ec5347c31c080a0a8d52ee7cfa" HandleID="k8s-pod-network.4a48ae9ec8973deb46659af44aa4a65628f3f9ec5347c31c080a0a8d52ee7cfa" Workload="localhost-k8s-coredns--668d6bf9bc--jfbds-eth0" Nov 1 00:46:36.452719 env[1313]: 2025-11-01 00:46:36.440 [INFO][4719] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:46:36.452719 env[1313]: 2025-11-01 00:46:36.440 [INFO][4719] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:46:36.452719 env[1313]: 2025-11-01 00:46:36.446 [WARNING][4719] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4a48ae9ec8973deb46659af44aa4a65628f3f9ec5347c31c080a0a8d52ee7cfa" HandleID="k8s-pod-network.4a48ae9ec8973deb46659af44aa4a65628f3f9ec5347c31c080a0a8d52ee7cfa" Workload="localhost-k8s-coredns--668d6bf9bc--jfbds-eth0" Nov 1 00:46:36.452719 env[1313]: 2025-11-01 00:46:36.446 [INFO][4719] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4a48ae9ec8973deb46659af44aa4a65628f3f9ec5347c31c080a0a8d52ee7cfa" HandleID="k8s-pod-network.4a48ae9ec8973deb46659af44aa4a65628f3f9ec5347c31c080a0a8d52ee7cfa" Workload="localhost-k8s-coredns--668d6bf9bc--jfbds-eth0" Nov 1 00:46:36.452719 env[1313]: 2025-11-01 00:46:36.449 [INFO][4719] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:46:36.452719 env[1313]: 2025-11-01 00:46:36.450 [INFO][4710] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4a48ae9ec8973deb46659af44aa4a65628f3f9ec5347c31c080a0a8d52ee7cfa" Nov 1 00:46:36.453338 env[1313]: time="2025-11-01T00:46:36.452757535Z" level=info msg="TearDown network for sandbox \"4a48ae9ec8973deb46659af44aa4a65628f3f9ec5347c31c080a0a8d52ee7cfa\" successfully" Nov 1 00:46:36.880622 env[1313]: time="2025-11-01T00:46:36.880548671Z" level=info msg="RemovePodSandbox \"4a48ae9ec8973deb46659af44aa4a65628f3f9ec5347c31c080a0a8d52ee7cfa\" returns successfully" Nov 1 00:46:36.881257 env[1313]: time="2025-11-01T00:46:36.881217296Z" level=info msg="StopPodSandbox for \"9d6a4ff5375e9dee030a9560e67521d69c889f7455951caa8c67e0f2b2e1c2db\"" Nov 1 00:46:36.973035 env[1313]: 2025-11-01 00:46:36.912 [WARNING][4737] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9d6a4ff5375e9dee030a9560e67521d69c889f7455951caa8c67e0f2b2e1c2db" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--bbnwx-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"5fbcbf90-e90d-4d2e-bb2c-68aa5206a338", ResourceVersion:"1126", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 45, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ba105215c48bf68bc29e525c05e16a676daa83d13ea02168e19ddfc673e27957", Pod:"goldmane-666569f655-bbnwx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali090a3938826", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:46:36.973035 env[1313]: 2025-11-01 00:46:36.912 [INFO][4737] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9d6a4ff5375e9dee030a9560e67521d69c889f7455951caa8c67e0f2b2e1c2db" Nov 1 00:46:36.973035 env[1313]: 2025-11-01 00:46:36.912 [INFO][4737] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="9d6a4ff5375e9dee030a9560e67521d69c889f7455951caa8c67e0f2b2e1c2db" iface="eth0" netns="" Nov 1 00:46:36.973035 env[1313]: 2025-11-01 00:46:36.912 [INFO][4737] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9d6a4ff5375e9dee030a9560e67521d69c889f7455951caa8c67e0f2b2e1c2db" Nov 1 00:46:36.973035 env[1313]: 2025-11-01 00:46:36.912 [INFO][4737] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9d6a4ff5375e9dee030a9560e67521d69c889f7455951caa8c67e0f2b2e1c2db" Nov 1 00:46:36.973035 env[1313]: 2025-11-01 00:46:36.960 [INFO][4747] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9d6a4ff5375e9dee030a9560e67521d69c889f7455951caa8c67e0f2b2e1c2db" HandleID="k8s-pod-network.9d6a4ff5375e9dee030a9560e67521d69c889f7455951caa8c67e0f2b2e1c2db" Workload="localhost-k8s-goldmane--666569f655--bbnwx-eth0" Nov 1 00:46:36.973035 env[1313]: 2025-11-01 00:46:36.960 [INFO][4747] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:46:36.973035 env[1313]: 2025-11-01 00:46:36.960 [INFO][4747] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:46:36.973035 env[1313]: 2025-11-01 00:46:36.968 [WARNING][4747] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="9d6a4ff5375e9dee030a9560e67521d69c889f7455951caa8c67e0f2b2e1c2db" HandleID="k8s-pod-network.9d6a4ff5375e9dee030a9560e67521d69c889f7455951caa8c67e0f2b2e1c2db" Workload="localhost-k8s-goldmane--666569f655--bbnwx-eth0" Nov 1 00:46:36.973035 env[1313]: 2025-11-01 00:46:36.968 [INFO][4747] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9d6a4ff5375e9dee030a9560e67521d69c889f7455951caa8c67e0f2b2e1c2db" HandleID="k8s-pod-network.9d6a4ff5375e9dee030a9560e67521d69c889f7455951caa8c67e0f2b2e1c2db" Workload="localhost-k8s-goldmane--666569f655--bbnwx-eth0" Nov 1 00:46:36.973035 env[1313]: 2025-11-01 00:46:36.970 [INFO][4747] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:46:36.973035 env[1313]: 2025-11-01 00:46:36.971 [INFO][4737] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9d6a4ff5375e9dee030a9560e67521d69c889f7455951caa8c67e0f2b2e1c2db" Nov 1 00:46:36.973615 env[1313]: time="2025-11-01T00:46:36.973074434Z" level=info msg="TearDown network for sandbox \"9d6a4ff5375e9dee030a9560e67521d69c889f7455951caa8c67e0f2b2e1c2db\" successfully" Nov 1 00:46:36.973615 env[1313]: time="2025-11-01T00:46:36.973113737Z" level=info msg="StopPodSandbox for \"9d6a4ff5375e9dee030a9560e67521d69c889f7455951caa8c67e0f2b2e1c2db\" returns successfully" Nov 1 00:46:36.973666 env[1313]: time="2025-11-01T00:46:36.973620108Z" level=info msg="RemovePodSandbox for \"9d6a4ff5375e9dee030a9560e67521d69c889f7455951caa8c67e0f2b2e1c2db\"" Nov 1 00:46:36.973692 env[1313]: time="2025-11-01T00:46:36.973651477Z" level=info msg="Forcibly stopping sandbox \"9d6a4ff5375e9dee030a9560e67521d69c889f7455951caa8c67e0f2b2e1c2db\"" Nov 1 00:46:37.045931 env[1313]: 2025-11-01 00:46:37.003 [WARNING][4765] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9d6a4ff5375e9dee030a9560e67521d69c889f7455951caa8c67e0f2b2e1c2db" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--bbnwx-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"5fbcbf90-e90d-4d2e-bb2c-68aa5206a338", ResourceVersion:"1126", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 45, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ba105215c48bf68bc29e525c05e16a676daa83d13ea02168e19ddfc673e27957", Pod:"goldmane-666569f655-bbnwx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali090a3938826", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:46:37.045931 env[1313]: 2025-11-01 00:46:37.004 [INFO][4765] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9d6a4ff5375e9dee030a9560e67521d69c889f7455951caa8c67e0f2b2e1c2db" Nov 1 00:46:37.045931 env[1313]: 2025-11-01 00:46:37.004 [INFO][4765] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="9d6a4ff5375e9dee030a9560e67521d69c889f7455951caa8c67e0f2b2e1c2db" iface="eth0" netns="" Nov 1 00:46:37.045931 env[1313]: 2025-11-01 00:46:37.004 [INFO][4765] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9d6a4ff5375e9dee030a9560e67521d69c889f7455951caa8c67e0f2b2e1c2db" Nov 1 00:46:37.045931 env[1313]: 2025-11-01 00:46:37.004 [INFO][4765] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9d6a4ff5375e9dee030a9560e67521d69c889f7455951caa8c67e0f2b2e1c2db" Nov 1 00:46:37.045931 env[1313]: 2025-11-01 00:46:37.030 [INFO][4775] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9d6a4ff5375e9dee030a9560e67521d69c889f7455951caa8c67e0f2b2e1c2db" HandleID="k8s-pod-network.9d6a4ff5375e9dee030a9560e67521d69c889f7455951caa8c67e0f2b2e1c2db" Workload="localhost-k8s-goldmane--666569f655--bbnwx-eth0" Nov 1 00:46:37.045931 env[1313]: 2025-11-01 00:46:37.030 [INFO][4775] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:46:37.045931 env[1313]: 2025-11-01 00:46:37.031 [INFO][4775] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:46:37.045931 env[1313]: 2025-11-01 00:46:37.039 [WARNING][4775] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="9d6a4ff5375e9dee030a9560e67521d69c889f7455951caa8c67e0f2b2e1c2db" HandleID="k8s-pod-network.9d6a4ff5375e9dee030a9560e67521d69c889f7455951caa8c67e0f2b2e1c2db" Workload="localhost-k8s-goldmane--666569f655--bbnwx-eth0" Nov 1 00:46:37.045931 env[1313]: 2025-11-01 00:46:37.039 [INFO][4775] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9d6a4ff5375e9dee030a9560e67521d69c889f7455951caa8c67e0f2b2e1c2db" HandleID="k8s-pod-network.9d6a4ff5375e9dee030a9560e67521d69c889f7455951caa8c67e0f2b2e1c2db" Workload="localhost-k8s-goldmane--666569f655--bbnwx-eth0" Nov 1 00:46:37.045931 env[1313]: 2025-11-01 00:46:37.041 [INFO][4775] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:46:37.045931 env[1313]: 2025-11-01 00:46:37.044 [INFO][4765] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9d6a4ff5375e9dee030a9560e67521d69c889f7455951caa8c67e0f2b2e1c2db" Nov 1 00:46:37.046731 env[1313]: time="2025-11-01T00:46:37.045971844Z" level=info msg="TearDown network for sandbox \"9d6a4ff5375e9dee030a9560e67521d69c889f7455951caa8c67e0f2b2e1c2db\" successfully" Nov 1 00:46:37.339115 env[1313]: time="2025-11-01T00:46:37.339057292Z" level=info msg="RemovePodSandbox \"9d6a4ff5375e9dee030a9560e67521d69c889f7455951caa8c67e0f2b2e1c2db\" returns successfully" Nov 1 00:46:37.339724 env[1313]: time="2025-11-01T00:46:37.339665594Z" level=info msg="StopPodSandbox for \"78af24ccbafbc3fe2805e32086a970ba748d84ceba4d9fcf426d832b1906a317\"" Nov 1 00:46:37.606200 env[1313]: 2025-11-01 00:46:37.416 [WARNING][4793] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="78af24ccbafbc3fe2805e32086a970ba748d84ceba4d9fcf426d832b1906a317" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--s979b-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6dbb4ea3-15f9-43b9-bbdc-e92a3f607f2c", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 45, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"2744b8c6792f36a11e1d45331f56c4da21b4ae36abe1c4fcf9d5d2d34ff44e62", Pod:"coredns-668d6bf9bc-s979b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali84a508d0077", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:46:37.606200 env[1313]: 2025-11-01 00:46:37.416 [INFO][4793] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="78af24ccbafbc3fe2805e32086a970ba748d84ceba4d9fcf426d832b1906a317" Nov 1 00:46:37.606200 env[1313]: 2025-11-01 00:46:37.416 [INFO][4793] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="78af24ccbafbc3fe2805e32086a970ba748d84ceba4d9fcf426d832b1906a317" iface="eth0" netns="" Nov 1 00:46:37.606200 env[1313]: 2025-11-01 00:46:37.416 [INFO][4793] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="78af24ccbafbc3fe2805e32086a970ba748d84ceba4d9fcf426d832b1906a317" Nov 1 00:46:37.606200 env[1313]: 2025-11-01 00:46:37.416 [INFO][4793] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="78af24ccbafbc3fe2805e32086a970ba748d84ceba4d9fcf426d832b1906a317" Nov 1 00:46:37.606200 env[1313]: 2025-11-01 00:46:37.439 [INFO][4803] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="78af24ccbafbc3fe2805e32086a970ba748d84ceba4d9fcf426d832b1906a317" HandleID="k8s-pod-network.78af24ccbafbc3fe2805e32086a970ba748d84ceba4d9fcf426d832b1906a317" Workload="localhost-k8s-coredns--668d6bf9bc--s979b-eth0" Nov 1 00:46:37.606200 env[1313]: 2025-11-01 00:46:37.439 [INFO][4803] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:46:37.606200 env[1313]: 2025-11-01 00:46:37.439 [INFO][4803] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:46:37.606200 env[1313]: 2025-11-01 00:46:37.600 [WARNING][4803] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="78af24ccbafbc3fe2805e32086a970ba748d84ceba4d9fcf426d832b1906a317" HandleID="k8s-pod-network.78af24ccbafbc3fe2805e32086a970ba748d84ceba4d9fcf426d832b1906a317" Workload="localhost-k8s-coredns--668d6bf9bc--s979b-eth0" Nov 1 00:46:37.606200 env[1313]: 2025-11-01 00:46:37.600 [INFO][4803] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="78af24ccbafbc3fe2805e32086a970ba748d84ceba4d9fcf426d832b1906a317" HandleID="k8s-pod-network.78af24ccbafbc3fe2805e32086a970ba748d84ceba4d9fcf426d832b1906a317" Workload="localhost-k8s-coredns--668d6bf9bc--s979b-eth0" Nov 1 00:46:37.606200 env[1313]: 2025-11-01 00:46:37.601 [INFO][4803] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:46:37.606200 env[1313]: 2025-11-01 00:46:37.603 [INFO][4793] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="78af24ccbafbc3fe2805e32086a970ba748d84ceba4d9fcf426d832b1906a317" Nov 1 00:46:37.606200 env[1313]: time="2025-11-01T00:46:37.606138049Z" level=info msg="TearDown network for sandbox \"78af24ccbafbc3fe2805e32086a970ba748d84ceba4d9fcf426d832b1906a317\" successfully" Nov 1 00:46:37.619880 env[1313]: time="2025-11-01T00:46:37.606210545Z" level=info msg="StopPodSandbox for \"78af24ccbafbc3fe2805e32086a970ba748d84ceba4d9fcf426d832b1906a317\" returns successfully" Nov 1 00:46:37.619880 env[1313]: time="2025-11-01T00:46:37.606726704Z" level=info msg="RemovePodSandbox for \"78af24ccbafbc3fe2805e32086a970ba748d84ceba4d9fcf426d832b1906a317\"" Nov 1 00:46:37.619880 env[1313]: time="2025-11-01T00:46:37.606754165Z" level=info msg="Forcibly stopping sandbox \"78af24ccbafbc3fe2805e32086a970ba748d84ceba4d9fcf426d832b1906a317\"" Nov 1 00:46:37.678480 env[1313]: 2025-11-01 00:46:37.644 [WARNING][4822] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="78af24ccbafbc3fe2805e32086a970ba748d84ceba4d9fcf426d832b1906a317" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--s979b-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6dbb4ea3-15f9-43b9-bbdc-e92a3f607f2c", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 45, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2744b8c6792f36a11e1d45331f56c4da21b4ae36abe1c4fcf9d5d2d34ff44e62", Pod:"coredns-668d6bf9bc-s979b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali84a508d0077", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:46:37.678480 env[1313]: 2025-11-01 00:46:37.644 [INFO][4822] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="78af24ccbafbc3fe2805e32086a970ba748d84ceba4d9fcf426d832b1906a317" Nov 1 00:46:37.678480 env[1313]: 2025-11-01 00:46:37.644 [INFO][4822] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="78af24ccbafbc3fe2805e32086a970ba748d84ceba4d9fcf426d832b1906a317" iface="eth0" netns="" Nov 1 00:46:37.678480 env[1313]: 2025-11-01 00:46:37.644 [INFO][4822] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="78af24ccbafbc3fe2805e32086a970ba748d84ceba4d9fcf426d832b1906a317" Nov 1 00:46:37.678480 env[1313]: 2025-11-01 00:46:37.644 [INFO][4822] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="78af24ccbafbc3fe2805e32086a970ba748d84ceba4d9fcf426d832b1906a317" Nov 1 00:46:37.678480 env[1313]: 2025-11-01 00:46:37.668 [INFO][4830] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="78af24ccbafbc3fe2805e32086a970ba748d84ceba4d9fcf426d832b1906a317" HandleID="k8s-pod-network.78af24ccbafbc3fe2805e32086a970ba748d84ceba4d9fcf426d832b1906a317" Workload="localhost-k8s-coredns--668d6bf9bc--s979b-eth0" Nov 1 00:46:37.678480 env[1313]: 2025-11-01 00:46:37.668 [INFO][4830] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:46:37.678480 env[1313]: 2025-11-01 00:46:37.668 [INFO][4830] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:46:37.678480 env[1313]: 2025-11-01 00:46:37.674 [WARNING][4830] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="78af24ccbafbc3fe2805e32086a970ba748d84ceba4d9fcf426d832b1906a317" HandleID="k8s-pod-network.78af24ccbafbc3fe2805e32086a970ba748d84ceba4d9fcf426d832b1906a317" Workload="localhost-k8s-coredns--668d6bf9bc--s979b-eth0" Nov 1 00:46:37.678480 env[1313]: 2025-11-01 00:46:37.674 [INFO][4830] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="78af24ccbafbc3fe2805e32086a970ba748d84ceba4d9fcf426d832b1906a317" HandleID="k8s-pod-network.78af24ccbafbc3fe2805e32086a970ba748d84ceba4d9fcf426d832b1906a317" Workload="localhost-k8s-coredns--668d6bf9bc--s979b-eth0" Nov 1 00:46:37.678480 env[1313]: 2025-11-01 00:46:37.675 [INFO][4830] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:46:37.678480 env[1313]: 2025-11-01 00:46:37.677 [INFO][4822] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="78af24ccbafbc3fe2805e32086a970ba748d84ceba4d9fcf426d832b1906a317" Nov 1 00:46:37.678947 env[1313]: time="2025-11-01T00:46:37.678504371Z" level=info msg="TearDown network for sandbox \"78af24ccbafbc3fe2805e32086a970ba748d84ceba4d9fcf426d832b1906a317\" successfully" Nov 1 00:46:37.713716 env[1313]: time="2025-11-01T00:46:37.713646448Z" level=info msg="RemovePodSandbox \"78af24ccbafbc3fe2805e32086a970ba748d84ceba4d9fcf426d832b1906a317\" returns successfully" Nov 1 00:46:37.714578 env[1313]: time="2025-11-01T00:46:37.714193495Z" level=info msg="StopPodSandbox for \"7246ed21bc671fc80d547e9ff96b714b56723571def31f8f182068ee838b16ca\"" Nov 1 00:46:37.781327 env[1313]: 2025-11-01 00:46:37.746 [WARNING][4849] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7246ed21bc671fc80d547e9ff96b714b56723571def31f8f182068ee838b16ca" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--zk5w7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"323323dc-c361-4116-a022-8e5f45430869", ResourceVersion:"1107", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 45, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"28a7330eeecbc526801cf8b62858a19617b67b08e4d659ac935e10b5b111cc29", Pod:"csi-node-driver-zk5w7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali99bb7c59c79", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:46:37.781327 env[1313]: 2025-11-01 00:46:37.747 [INFO][4849] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7246ed21bc671fc80d547e9ff96b714b56723571def31f8f182068ee838b16ca" Nov 1 00:46:37.781327 env[1313]: 2025-11-01 00:46:37.747 [INFO][4849] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7246ed21bc671fc80d547e9ff96b714b56723571def31f8f182068ee838b16ca" iface="eth0" netns="" Nov 1 00:46:37.781327 env[1313]: 2025-11-01 00:46:37.747 [INFO][4849] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7246ed21bc671fc80d547e9ff96b714b56723571def31f8f182068ee838b16ca" Nov 1 00:46:37.781327 env[1313]: 2025-11-01 00:46:37.747 [INFO][4849] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7246ed21bc671fc80d547e9ff96b714b56723571def31f8f182068ee838b16ca" Nov 1 00:46:37.781327 env[1313]: 2025-11-01 00:46:37.768 [INFO][4858] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7246ed21bc671fc80d547e9ff96b714b56723571def31f8f182068ee838b16ca" HandleID="k8s-pod-network.7246ed21bc671fc80d547e9ff96b714b56723571def31f8f182068ee838b16ca" Workload="localhost-k8s-csi--node--driver--zk5w7-eth0" Nov 1 00:46:37.781327 env[1313]: 2025-11-01 00:46:37.769 [INFO][4858] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:46:37.781327 env[1313]: 2025-11-01 00:46:37.769 [INFO][4858] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:46:37.781327 env[1313]: 2025-11-01 00:46:37.775 [WARNING][4858] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="7246ed21bc671fc80d547e9ff96b714b56723571def31f8f182068ee838b16ca" HandleID="k8s-pod-network.7246ed21bc671fc80d547e9ff96b714b56723571def31f8f182068ee838b16ca" Workload="localhost-k8s-csi--node--driver--zk5w7-eth0" Nov 1 00:46:37.781327 env[1313]: 2025-11-01 00:46:37.775 [INFO][4858] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7246ed21bc671fc80d547e9ff96b714b56723571def31f8f182068ee838b16ca" HandleID="k8s-pod-network.7246ed21bc671fc80d547e9ff96b714b56723571def31f8f182068ee838b16ca" Workload="localhost-k8s-csi--node--driver--zk5w7-eth0" Nov 1 00:46:37.781327 env[1313]: 2025-11-01 00:46:37.777 [INFO][4858] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:46:37.781327 env[1313]: 2025-11-01 00:46:37.779 [INFO][4849] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7246ed21bc671fc80d547e9ff96b714b56723571def31f8f182068ee838b16ca" Nov 1 00:46:37.781982 env[1313]: time="2025-11-01T00:46:37.781376531Z" level=info msg="TearDown network for sandbox \"7246ed21bc671fc80d547e9ff96b714b56723571def31f8f182068ee838b16ca\" successfully" Nov 1 00:46:37.781982 env[1313]: time="2025-11-01T00:46:37.781407830Z" level=info msg="StopPodSandbox for \"7246ed21bc671fc80d547e9ff96b714b56723571def31f8f182068ee838b16ca\" returns successfully" Nov 1 00:46:37.781982 env[1313]: time="2025-11-01T00:46:37.781912206Z" level=info msg="RemovePodSandbox for \"7246ed21bc671fc80d547e9ff96b714b56723571def31f8f182068ee838b16ca\"" Nov 1 00:46:37.781982 env[1313]: time="2025-11-01T00:46:37.781942222Z" level=info msg="Forcibly stopping sandbox \"7246ed21bc671fc80d547e9ff96b714b56723571def31f8f182068ee838b16ca\"" Nov 1 00:46:37.865256 env[1313]: 2025-11-01 00:46:37.832 [WARNING][4876] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7246ed21bc671fc80d547e9ff96b714b56723571def31f8f182068ee838b16ca" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--zk5w7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"323323dc-c361-4116-a022-8e5f45430869", ResourceVersion:"1107", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 45, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"28a7330eeecbc526801cf8b62858a19617b67b08e4d659ac935e10b5b111cc29", Pod:"csi-node-driver-zk5w7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali99bb7c59c79", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:46:37.865256 env[1313]: 2025-11-01 00:46:37.832 [INFO][4876] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7246ed21bc671fc80d547e9ff96b714b56723571def31f8f182068ee838b16ca" Nov 1 00:46:37.865256 env[1313]: 2025-11-01 00:46:37.832 [INFO][4876] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7246ed21bc671fc80d547e9ff96b714b56723571def31f8f182068ee838b16ca" iface="eth0" netns="" Nov 1 00:46:37.865256 env[1313]: 2025-11-01 00:46:37.832 [INFO][4876] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7246ed21bc671fc80d547e9ff96b714b56723571def31f8f182068ee838b16ca" Nov 1 00:46:37.865256 env[1313]: 2025-11-01 00:46:37.832 [INFO][4876] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7246ed21bc671fc80d547e9ff96b714b56723571def31f8f182068ee838b16ca" Nov 1 00:46:37.865256 env[1313]: 2025-11-01 00:46:37.853 [INFO][4884] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7246ed21bc671fc80d547e9ff96b714b56723571def31f8f182068ee838b16ca" HandleID="k8s-pod-network.7246ed21bc671fc80d547e9ff96b714b56723571def31f8f182068ee838b16ca" Workload="localhost-k8s-csi--node--driver--zk5w7-eth0" Nov 1 00:46:37.865256 env[1313]: 2025-11-01 00:46:37.853 [INFO][4884] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:46:37.865256 env[1313]: 2025-11-01 00:46:37.853 [INFO][4884] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:46:37.865256 env[1313]: 2025-11-01 00:46:37.860 [WARNING][4884] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="7246ed21bc671fc80d547e9ff96b714b56723571def31f8f182068ee838b16ca" HandleID="k8s-pod-network.7246ed21bc671fc80d547e9ff96b714b56723571def31f8f182068ee838b16ca" Workload="localhost-k8s-csi--node--driver--zk5w7-eth0" Nov 1 00:46:37.865256 env[1313]: 2025-11-01 00:46:37.860 [INFO][4884] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7246ed21bc671fc80d547e9ff96b714b56723571def31f8f182068ee838b16ca" HandleID="k8s-pod-network.7246ed21bc671fc80d547e9ff96b714b56723571def31f8f182068ee838b16ca" Workload="localhost-k8s-csi--node--driver--zk5w7-eth0" Nov 1 00:46:37.865256 env[1313]: 2025-11-01 00:46:37.862 [INFO][4884] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:46:37.865256 env[1313]: 2025-11-01 00:46:37.863 [INFO][4876] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7246ed21bc671fc80d547e9ff96b714b56723571def31f8f182068ee838b16ca" Nov 1 00:46:37.865256 env[1313]: time="2025-11-01T00:46:37.865212103Z" level=info msg="TearDown network for sandbox \"7246ed21bc671fc80d547e9ff96b714b56723571def31f8f182068ee838b16ca\" successfully" Nov 1 00:46:37.873577 env[1313]: time="2025-11-01T00:46:37.873538586Z" level=info msg="RemovePodSandbox \"7246ed21bc671fc80d547e9ff96b714b56723571def31f8f182068ee838b16ca\" returns successfully" Nov 1 00:46:37.874232 env[1313]: time="2025-11-01T00:46:37.874204695Z" level=info msg="StopPodSandbox for \"33e3c78eeed54dfa10cf1875bdbb6f84aa8cb8d75ff47a04680d3a14eb4417a8\"" Nov 1 00:46:37.941968 env[1313]: 2025-11-01 00:46:37.909 [WARNING][4901] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="33e3c78eeed54dfa10cf1875bdbb6f84aa8cb8d75ff47a04680d3a14eb4417a8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7c858d548c--69qzg-eth0", GenerateName:"calico-apiserver-7c858d548c-", Namespace:"calico-apiserver", SelfLink:"", UID:"ffdf82e5-9850-41df-9576-1cf8a00ef8fd", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 45, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c858d548c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a2014592d5374ffef25b33c137d33f188fd6dcbf0fa5615243d541115ca9614f", Pod:"calico-apiserver-7c858d548c-69qzg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib4b22c13b23", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:46:37.941968 env[1313]: 2025-11-01 00:46:37.909 [INFO][4901] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="33e3c78eeed54dfa10cf1875bdbb6f84aa8cb8d75ff47a04680d3a14eb4417a8" Nov 1 00:46:37.941968 env[1313]: 2025-11-01 00:46:37.909 [INFO][4901] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="33e3c78eeed54dfa10cf1875bdbb6f84aa8cb8d75ff47a04680d3a14eb4417a8" iface="eth0" netns="" Nov 1 00:46:37.941968 env[1313]: 2025-11-01 00:46:37.909 [INFO][4901] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="33e3c78eeed54dfa10cf1875bdbb6f84aa8cb8d75ff47a04680d3a14eb4417a8" Nov 1 00:46:37.941968 env[1313]: 2025-11-01 00:46:37.909 [INFO][4901] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="33e3c78eeed54dfa10cf1875bdbb6f84aa8cb8d75ff47a04680d3a14eb4417a8" Nov 1 00:46:37.941968 env[1313]: 2025-11-01 00:46:37.930 [INFO][4910] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="33e3c78eeed54dfa10cf1875bdbb6f84aa8cb8d75ff47a04680d3a14eb4417a8" HandleID="k8s-pod-network.33e3c78eeed54dfa10cf1875bdbb6f84aa8cb8d75ff47a04680d3a14eb4417a8" Workload="localhost-k8s-calico--apiserver--7c858d548c--69qzg-eth0" Nov 1 00:46:37.941968 env[1313]: 2025-11-01 00:46:37.931 [INFO][4910] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 1 00:46:37.941968 env[1313]: 2025-11-01 00:46:37.931 [INFO][4910] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:46:37.941968 env[1313]: 2025-11-01 00:46:37.937 [WARNING][4910] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="33e3c78eeed54dfa10cf1875bdbb6f84aa8cb8d75ff47a04680d3a14eb4417a8" HandleID="k8s-pod-network.33e3c78eeed54dfa10cf1875bdbb6f84aa8cb8d75ff47a04680d3a14eb4417a8" Workload="localhost-k8s-calico--apiserver--7c858d548c--69qzg-eth0" Nov 1 00:46:37.941968 env[1313]: 2025-11-01 00:46:37.937 [INFO][4910] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="33e3c78eeed54dfa10cf1875bdbb6f84aa8cb8d75ff47a04680d3a14eb4417a8" HandleID="k8s-pod-network.33e3c78eeed54dfa10cf1875bdbb6f84aa8cb8d75ff47a04680d3a14eb4417a8" Workload="localhost-k8s-calico--apiserver--7c858d548c--69qzg-eth0" Nov 1 00:46:37.941968 env[1313]: 2025-11-01 00:46:37.938 [INFO][4910] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:46:37.941968 env[1313]: 2025-11-01 00:46:37.940 [INFO][4901] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="33e3c78eeed54dfa10cf1875bdbb6f84aa8cb8d75ff47a04680d3a14eb4417a8" Nov 1 00:46:37.942596 env[1313]: time="2025-11-01T00:46:37.942007624Z" level=info msg="TearDown network for sandbox \"33e3c78eeed54dfa10cf1875bdbb6f84aa8cb8d75ff47a04680d3a14eb4417a8\" successfully" Nov 1 00:46:37.942596 env[1313]: time="2025-11-01T00:46:37.942043181Z" level=info msg="StopPodSandbox for \"33e3c78eeed54dfa10cf1875bdbb6f84aa8cb8d75ff47a04680d3a14eb4417a8\" returns successfully" Nov 1 00:46:37.942692 env[1313]: time="2025-11-01T00:46:37.942623390Z" level=info msg="RemovePodSandbox for \"33e3c78eeed54dfa10cf1875bdbb6f84aa8cb8d75ff47a04680d3a14eb4417a8\"" Nov 1 00:46:37.942783 env[1313]: time="2025-11-01T00:46:37.942678393Z" level=info msg="Forcibly stopping sandbox \"33e3c78eeed54dfa10cf1875bdbb6f84aa8cb8d75ff47a04680d3a14eb4417a8\"" Nov 1 00:46:38.005251 env[1313]: 2025-11-01 00:46:37.976 [WARNING][4928] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="33e3c78eeed54dfa10cf1875bdbb6f84aa8cb8d75ff47a04680d3a14eb4417a8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7c858d548c--69qzg-eth0", GenerateName:"calico-apiserver-7c858d548c-", Namespace:"calico-apiserver", SelfLink:"", UID:"ffdf82e5-9850-41df-9576-1cf8a00ef8fd", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 45, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c858d548c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a2014592d5374ffef25b33c137d33f188fd6dcbf0fa5615243d541115ca9614f", Pod:"calico-apiserver-7c858d548c-69qzg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib4b22c13b23", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:46:38.005251 env[1313]: 2025-11-01 00:46:37.976 [INFO][4928] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="33e3c78eeed54dfa10cf1875bdbb6f84aa8cb8d75ff47a04680d3a14eb4417a8" Nov 1 00:46:38.005251 env[1313]: 2025-11-01 00:46:37.976 [INFO][4928] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="33e3c78eeed54dfa10cf1875bdbb6f84aa8cb8d75ff47a04680d3a14eb4417a8" iface="eth0" netns="" Nov 1 00:46:38.005251 env[1313]: 2025-11-01 00:46:37.976 [INFO][4928] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="33e3c78eeed54dfa10cf1875bdbb6f84aa8cb8d75ff47a04680d3a14eb4417a8" Nov 1 00:46:38.005251 env[1313]: 2025-11-01 00:46:37.976 [INFO][4928] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="33e3c78eeed54dfa10cf1875bdbb6f84aa8cb8d75ff47a04680d3a14eb4417a8" Nov 1 00:46:38.005251 env[1313]: 2025-11-01 00:46:37.995 [INFO][4937] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="33e3c78eeed54dfa10cf1875bdbb6f84aa8cb8d75ff47a04680d3a14eb4417a8" HandleID="k8s-pod-network.33e3c78eeed54dfa10cf1875bdbb6f84aa8cb8d75ff47a04680d3a14eb4417a8" Workload="localhost-k8s-calico--apiserver--7c858d548c--69qzg-eth0" Nov 1 00:46:38.005251 env[1313]: 2025-11-01 00:46:37.995 [INFO][4937] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:46:38.005251 env[1313]: 2025-11-01 00:46:37.995 [INFO][4937] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:46:38.005251 env[1313]: 2025-11-01 00:46:38.000 [WARNING][4937] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="33e3c78eeed54dfa10cf1875bdbb6f84aa8cb8d75ff47a04680d3a14eb4417a8" HandleID="k8s-pod-network.33e3c78eeed54dfa10cf1875bdbb6f84aa8cb8d75ff47a04680d3a14eb4417a8" Workload="localhost-k8s-calico--apiserver--7c858d548c--69qzg-eth0" Nov 1 00:46:38.005251 env[1313]: 2025-11-01 00:46:38.000 [INFO][4937] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="33e3c78eeed54dfa10cf1875bdbb6f84aa8cb8d75ff47a04680d3a14eb4417a8" HandleID="k8s-pod-network.33e3c78eeed54dfa10cf1875bdbb6f84aa8cb8d75ff47a04680d3a14eb4417a8" Workload="localhost-k8s-calico--apiserver--7c858d548c--69qzg-eth0" Nov 1 00:46:38.005251 env[1313]: 2025-11-01 00:46:38.002 [INFO][4937] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:46:38.005251 env[1313]: 2025-11-01 00:46:38.003 [INFO][4928] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="33e3c78eeed54dfa10cf1875bdbb6f84aa8cb8d75ff47a04680d3a14eb4417a8" Nov 1 00:46:38.005722 env[1313]: time="2025-11-01T00:46:38.005284381Z" level=info msg="TearDown network for sandbox \"33e3c78eeed54dfa10cf1875bdbb6f84aa8cb8d75ff47a04680d3a14eb4417a8\" successfully" Nov 1 00:46:38.014039 env[1313]: time="2025-11-01T00:46:38.014002999Z" level=info msg="RemovePodSandbox \"33e3c78eeed54dfa10cf1875bdbb6f84aa8cb8d75ff47a04680d3a14eb4417a8\" returns successfully" Nov 1 00:46:38.014568 env[1313]: time="2025-11-01T00:46:38.014538815Z" level=info msg="StopPodSandbox for \"c0f66d4bbe5eb61cab56e8ea514aa99208d4e0768b561701e1c0823b7e31fbc9\"" Nov 1 00:46:38.077224 env[1313]: 2025-11-01 00:46:38.045 [WARNING][4954] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c0f66d4bbe5eb61cab56e8ea514aa99208d4e0768b561701e1c0823b7e31fbc9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5dcb6947d5--ljzpr-eth0", GenerateName:"calico-kube-controllers-5dcb6947d5-", Namespace:"calico-system", SelfLink:"", UID:"6fe081ef-ff27-4230-8865-b572345e2224", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 45, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5dcb6947d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ba7d407c96e942f64c24e1fcb3cf87933e76ff92560749a5112f07ae6befcc9e", Pod:"calico-kube-controllers-5dcb6947d5-ljzpr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia4f08285b02", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:46:38.077224 env[1313]: 2025-11-01 00:46:38.046 [INFO][4954] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c0f66d4bbe5eb61cab56e8ea514aa99208d4e0768b561701e1c0823b7e31fbc9" Nov 1 00:46:38.077224 env[1313]: 2025-11-01 00:46:38.046 [INFO][4954] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="c0f66d4bbe5eb61cab56e8ea514aa99208d4e0768b561701e1c0823b7e31fbc9" iface="eth0" netns="" Nov 1 00:46:38.077224 env[1313]: 2025-11-01 00:46:38.046 [INFO][4954] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c0f66d4bbe5eb61cab56e8ea514aa99208d4e0768b561701e1c0823b7e31fbc9" Nov 1 00:46:38.077224 env[1313]: 2025-11-01 00:46:38.046 [INFO][4954] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c0f66d4bbe5eb61cab56e8ea514aa99208d4e0768b561701e1c0823b7e31fbc9" Nov 1 00:46:38.077224 env[1313]: 2025-11-01 00:46:38.064 [INFO][4963] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c0f66d4bbe5eb61cab56e8ea514aa99208d4e0768b561701e1c0823b7e31fbc9" HandleID="k8s-pod-network.c0f66d4bbe5eb61cab56e8ea514aa99208d4e0768b561701e1c0823b7e31fbc9" Workload="localhost-k8s-calico--kube--controllers--5dcb6947d5--ljzpr-eth0" Nov 1 00:46:38.077224 env[1313]: 2025-11-01 00:46:38.064 [INFO][4963] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:46:38.077224 env[1313]: 2025-11-01 00:46:38.064 [INFO][4963] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:46:38.077224 env[1313]: 2025-11-01 00:46:38.072 [WARNING][4963] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c0f66d4bbe5eb61cab56e8ea514aa99208d4e0768b561701e1c0823b7e31fbc9" HandleID="k8s-pod-network.c0f66d4bbe5eb61cab56e8ea514aa99208d4e0768b561701e1c0823b7e31fbc9" Workload="localhost-k8s-calico--kube--controllers--5dcb6947d5--ljzpr-eth0" Nov 1 00:46:38.077224 env[1313]: 2025-11-01 00:46:38.072 [INFO][4963] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c0f66d4bbe5eb61cab56e8ea514aa99208d4e0768b561701e1c0823b7e31fbc9" HandleID="k8s-pod-network.c0f66d4bbe5eb61cab56e8ea514aa99208d4e0768b561701e1c0823b7e31fbc9" Workload="localhost-k8s-calico--kube--controllers--5dcb6947d5--ljzpr-eth0" Nov 1 00:46:38.077224 env[1313]: 2025-11-01 00:46:38.073 [INFO][4963] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:46:38.077224 env[1313]: 2025-11-01 00:46:38.075 [INFO][4954] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c0f66d4bbe5eb61cab56e8ea514aa99208d4e0768b561701e1c0823b7e31fbc9" Nov 1 00:46:38.077844 env[1313]: time="2025-11-01T00:46:38.077264604Z" level=info msg="TearDown network for sandbox \"c0f66d4bbe5eb61cab56e8ea514aa99208d4e0768b561701e1c0823b7e31fbc9\" successfully" Nov 1 00:46:38.077844 env[1313]: time="2025-11-01T00:46:38.077309138Z" level=info msg="StopPodSandbox for \"c0f66d4bbe5eb61cab56e8ea514aa99208d4e0768b561701e1c0823b7e31fbc9\" returns successfully" Nov 1 00:46:38.077966 env[1313]: time="2025-11-01T00:46:38.077929702Z" level=info msg="RemovePodSandbox for \"c0f66d4bbe5eb61cab56e8ea514aa99208d4e0768b561701e1c0823b7e31fbc9\"" Nov 1 00:46:38.078008 env[1313]: time="2025-11-01T00:46:38.077970117Z" level=info msg="Forcibly stopping sandbox \"c0f66d4bbe5eb61cab56e8ea514aa99208d4e0768b561701e1c0823b7e31fbc9\"" Nov 1 00:46:38.181467 env[1313]: 2025-11-01 00:46:38.108 [WARNING][4981] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c0f66d4bbe5eb61cab56e8ea514aa99208d4e0768b561701e1c0823b7e31fbc9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5dcb6947d5--ljzpr-eth0", GenerateName:"calico-kube-controllers-5dcb6947d5-", Namespace:"calico-system", SelfLink:"", UID:"6fe081ef-ff27-4230-8865-b572345e2224", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 45, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5dcb6947d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ba7d407c96e942f64c24e1fcb3cf87933e76ff92560749a5112f07ae6befcc9e", Pod:"calico-kube-controllers-5dcb6947d5-ljzpr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia4f08285b02", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:46:38.181467 env[1313]: 2025-11-01 00:46:38.109 [INFO][4981] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c0f66d4bbe5eb61cab56e8ea514aa99208d4e0768b561701e1c0823b7e31fbc9" Nov 1 00:46:38.181467 env[1313]: 2025-11-01 00:46:38.109 [INFO][4981] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="c0f66d4bbe5eb61cab56e8ea514aa99208d4e0768b561701e1c0823b7e31fbc9" iface="eth0" netns="" Nov 1 00:46:38.181467 env[1313]: 2025-11-01 00:46:38.109 [INFO][4981] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c0f66d4bbe5eb61cab56e8ea514aa99208d4e0768b561701e1c0823b7e31fbc9" Nov 1 00:46:38.181467 env[1313]: 2025-11-01 00:46:38.109 [INFO][4981] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c0f66d4bbe5eb61cab56e8ea514aa99208d4e0768b561701e1c0823b7e31fbc9" Nov 1 00:46:38.181467 env[1313]: 2025-11-01 00:46:38.132 [INFO][4991] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c0f66d4bbe5eb61cab56e8ea514aa99208d4e0768b561701e1c0823b7e31fbc9" HandleID="k8s-pod-network.c0f66d4bbe5eb61cab56e8ea514aa99208d4e0768b561701e1c0823b7e31fbc9" Workload="localhost-k8s-calico--kube--controllers--5dcb6947d5--ljzpr-eth0" Nov 1 00:46:38.181467 env[1313]: 2025-11-01 00:46:38.132 [INFO][4991] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:46:38.181467 env[1313]: 2025-11-01 00:46:38.132 [INFO][4991] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:46:38.181467 env[1313]: 2025-11-01 00:46:38.175 [WARNING][4991] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c0f66d4bbe5eb61cab56e8ea514aa99208d4e0768b561701e1c0823b7e31fbc9" HandleID="k8s-pod-network.c0f66d4bbe5eb61cab56e8ea514aa99208d4e0768b561701e1c0823b7e31fbc9" Workload="localhost-k8s-calico--kube--controllers--5dcb6947d5--ljzpr-eth0" Nov 1 00:46:38.181467 env[1313]: 2025-11-01 00:46:38.175 [INFO][4991] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c0f66d4bbe5eb61cab56e8ea514aa99208d4e0768b561701e1c0823b7e31fbc9" HandleID="k8s-pod-network.c0f66d4bbe5eb61cab56e8ea514aa99208d4e0768b561701e1c0823b7e31fbc9" Workload="localhost-k8s-calico--kube--controllers--5dcb6947d5--ljzpr-eth0" Nov 1 00:46:38.181467 env[1313]: 2025-11-01 00:46:38.177 [INFO][4991] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:46:38.181467 env[1313]: 2025-11-01 00:46:38.179 [INFO][4981] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c0f66d4bbe5eb61cab56e8ea514aa99208d4e0768b561701e1c0823b7e31fbc9" Nov 1 00:46:38.181942 env[1313]: time="2025-11-01T00:46:38.181496549Z" level=info msg="TearDown network for sandbox \"c0f66d4bbe5eb61cab56e8ea514aa99208d4e0768b561701e1c0823b7e31fbc9\" successfully" Nov 1 00:46:38.186097 env[1313]: time="2025-11-01T00:46:38.186040635Z" level=info msg="RemovePodSandbox \"c0f66d4bbe5eb61cab56e8ea514aa99208d4e0768b561701e1c0823b7e31fbc9\" returns successfully" Nov 1 00:46:38.186722 env[1313]: time="2025-11-01T00:46:38.186691836Z" level=info msg="StopPodSandbox for \"f06b1e1136a71a703e314baa39dd2dcabbfca8b11f5516d845f11ecceba01c4d\"" Nov 1 00:46:38.258374 env[1313]: 2025-11-01 00:46:38.223 [WARNING][5009] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f06b1e1136a71a703e314baa39dd2dcabbfca8b11f5516d845f11ecceba01c4d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7c858d548c--cmrkr-eth0", GenerateName:"calico-apiserver-7c858d548c-", Namespace:"calico-apiserver", SelfLink:"", UID:"704976ec-fdca-4788-bd96-1a541f0cf01c", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 45, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c858d548c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0db8a7948d74f6f54779e2dfa35c25a4eac2f258b95a4aed262375e86a65b80b", Pod:"calico-apiserver-7c858d548c-cmrkr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia48937da2b4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:46:38.258374 env[1313]: 2025-11-01 00:46:38.224 [INFO][5009] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f06b1e1136a71a703e314baa39dd2dcabbfca8b11f5516d845f11ecceba01c4d" Nov 1 00:46:38.258374 env[1313]: 2025-11-01 00:46:38.224 [INFO][5009] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f06b1e1136a71a703e314baa39dd2dcabbfca8b11f5516d845f11ecceba01c4d" iface="eth0" netns="" Nov 1 00:46:38.258374 env[1313]: 2025-11-01 00:46:38.224 [INFO][5009] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f06b1e1136a71a703e314baa39dd2dcabbfca8b11f5516d845f11ecceba01c4d" Nov 1 00:46:38.258374 env[1313]: 2025-11-01 00:46:38.224 [INFO][5009] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f06b1e1136a71a703e314baa39dd2dcabbfca8b11f5516d845f11ecceba01c4d" Nov 1 00:46:38.258374 env[1313]: 2025-11-01 00:46:38.244 [INFO][5017] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f06b1e1136a71a703e314baa39dd2dcabbfca8b11f5516d845f11ecceba01c4d" HandleID="k8s-pod-network.f06b1e1136a71a703e314baa39dd2dcabbfca8b11f5516d845f11ecceba01c4d" Workload="localhost-k8s-calico--apiserver--7c858d548c--cmrkr-eth0" Nov 1 00:46:38.258374 env[1313]: 2025-11-01 00:46:38.244 [INFO][5017] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:46:38.258374 env[1313]: 2025-11-01 00:46:38.244 [INFO][5017] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:46:38.258374 env[1313]: 2025-11-01 00:46:38.252 [WARNING][5017] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f06b1e1136a71a703e314baa39dd2dcabbfca8b11f5516d845f11ecceba01c4d" HandleID="k8s-pod-network.f06b1e1136a71a703e314baa39dd2dcabbfca8b11f5516d845f11ecceba01c4d" Workload="localhost-k8s-calico--apiserver--7c858d548c--cmrkr-eth0" Nov 1 00:46:38.258374 env[1313]: 2025-11-01 00:46:38.252 [INFO][5017] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f06b1e1136a71a703e314baa39dd2dcabbfca8b11f5516d845f11ecceba01c4d" HandleID="k8s-pod-network.f06b1e1136a71a703e314baa39dd2dcabbfca8b11f5516d845f11ecceba01c4d" Workload="localhost-k8s-calico--apiserver--7c858d548c--cmrkr-eth0" Nov 1 00:46:38.258374 env[1313]: 2025-11-01 00:46:38.254 [INFO][5017] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:46:38.258374 env[1313]: 2025-11-01 00:46:38.256 [INFO][5009] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f06b1e1136a71a703e314baa39dd2dcabbfca8b11f5516d845f11ecceba01c4d" Nov 1 00:46:38.258866 env[1313]: time="2025-11-01T00:46:38.258400501Z" level=info msg="TearDown network for sandbox \"f06b1e1136a71a703e314baa39dd2dcabbfca8b11f5516d845f11ecceba01c4d\" successfully" Nov 1 00:46:38.258866 env[1313]: time="2025-11-01T00:46:38.258433983Z" level=info msg="StopPodSandbox for \"f06b1e1136a71a703e314baa39dd2dcabbfca8b11f5516d845f11ecceba01c4d\" returns successfully" Nov 1 00:46:38.259024 env[1313]: time="2025-11-01T00:46:38.258985498Z" level=info msg="RemovePodSandbox for \"f06b1e1136a71a703e314baa39dd2dcabbfca8b11f5516d845f11ecceba01c4d\"" Nov 1 00:46:38.259097 env[1313]: time="2025-11-01T00:46:38.259032175Z" level=info msg="Forcibly stopping sandbox \"f06b1e1136a71a703e314baa39dd2dcabbfca8b11f5516d845f11ecceba01c4d\"" Nov 1 00:46:38.275036 env[1313]: time="2025-11-01T00:46:38.274886381Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:46:38.326589 env[1313]: 2025-11-01 00:46:38.294 [WARNING][5035] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, 
don't delete WEP. ContainerID="f06b1e1136a71a703e314baa39dd2dcabbfca8b11f5516d845f11ecceba01c4d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7c858d548c--cmrkr-eth0", GenerateName:"calico-apiserver-7c858d548c-", Namespace:"calico-apiserver", SelfLink:"", UID:"704976ec-fdca-4788-bd96-1a541f0cf01c", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 45, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c858d548c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0db8a7948d74f6f54779e2dfa35c25a4eac2f258b95a4aed262375e86a65b80b", Pod:"calico-apiserver-7c858d548c-cmrkr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia48937da2b4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:46:38.326589 env[1313]: 2025-11-01 00:46:38.294 [INFO][5035] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f06b1e1136a71a703e314baa39dd2dcabbfca8b11f5516d845f11ecceba01c4d" Nov 1 00:46:38.326589 env[1313]: 2025-11-01 00:46:38.294 [INFO][5035] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="f06b1e1136a71a703e314baa39dd2dcabbfca8b11f5516d845f11ecceba01c4d" iface="eth0" netns="" Nov 1 00:46:38.326589 env[1313]: 2025-11-01 00:46:38.294 [INFO][5035] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f06b1e1136a71a703e314baa39dd2dcabbfca8b11f5516d845f11ecceba01c4d" Nov 1 00:46:38.326589 env[1313]: 2025-11-01 00:46:38.294 [INFO][5035] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f06b1e1136a71a703e314baa39dd2dcabbfca8b11f5516d845f11ecceba01c4d" Nov 1 00:46:38.326589 env[1313]: 2025-11-01 00:46:38.315 [INFO][5044] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f06b1e1136a71a703e314baa39dd2dcabbfca8b11f5516d845f11ecceba01c4d" HandleID="k8s-pod-network.f06b1e1136a71a703e314baa39dd2dcabbfca8b11f5516d845f11ecceba01c4d" Workload="localhost-k8s-calico--apiserver--7c858d548c--cmrkr-eth0" Nov 1 00:46:38.326589 env[1313]: 2025-11-01 00:46:38.315 [INFO][5044] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:46:38.326589 env[1313]: 2025-11-01 00:46:38.315 [INFO][5044] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:46:38.326589 env[1313]: 2025-11-01 00:46:38.321 [WARNING][5044] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f06b1e1136a71a703e314baa39dd2dcabbfca8b11f5516d845f11ecceba01c4d" HandleID="k8s-pod-network.f06b1e1136a71a703e314baa39dd2dcabbfca8b11f5516d845f11ecceba01c4d" Workload="localhost-k8s-calico--apiserver--7c858d548c--cmrkr-eth0" Nov 1 00:46:38.326589 env[1313]: 2025-11-01 00:46:38.321 [INFO][5044] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f06b1e1136a71a703e314baa39dd2dcabbfca8b11f5516d845f11ecceba01c4d" HandleID="k8s-pod-network.f06b1e1136a71a703e314baa39dd2dcabbfca8b11f5516d845f11ecceba01c4d" Workload="localhost-k8s-calico--apiserver--7c858d548c--cmrkr-eth0" Nov 1 00:46:38.326589 env[1313]: 2025-11-01 00:46:38.322 [INFO][5044] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:46:38.326589 env[1313]: 2025-11-01 00:46:38.324 [INFO][5035] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f06b1e1136a71a703e314baa39dd2dcabbfca8b11f5516d845f11ecceba01c4d" Nov 1 00:46:38.327712 env[1313]: time="2025-11-01T00:46:38.326625438Z" level=info msg="TearDown network for sandbox \"f06b1e1136a71a703e314baa39dd2dcabbfca8b11f5516d845f11ecceba01c4d\" successfully" Nov 1 00:46:38.331786 env[1313]: time="2025-11-01T00:46:38.331749150Z" level=info msg="RemovePodSandbox \"f06b1e1136a71a703e314baa39dd2dcabbfca8b11f5516d845f11ecceba01c4d\" returns successfully" Nov 1 00:46:38.332293 env[1313]: time="2025-11-01T00:46:38.332267744Z" level=info msg="StopPodSandbox for \"80ab6533acd3b9b39bd97e4ee3743e6fafca9e527146f8c0ef2a354bb593ae74\"" Nov 1 00:46:38.395814 env[1313]: 2025-11-01 00:46:38.363 [WARNING][5064] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="80ab6533acd3b9b39bd97e4ee3743e6fafca9e527146f8c0ef2a354bb593ae74" WorkloadEndpoint="localhost-k8s-whisker--8977d49c--c9phz-eth0" Nov 1 00:46:38.395814 env[1313]: 2025-11-01 00:46:38.363 [INFO][5064] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="80ab6533acd3b9b39bd97e4ee3743e6fafca9e527146f8c0ef2a354bb593ae74" Nov 1 00:46:38.395814 env[1313]: 2025-11-01 00:46:38.363 [INFO][5064] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="80ab6533acd3b9b39bd97e4ee3743e6fafca9e527146f8c0ef2a354bb593ae74" iface="eth0" netns="" Nov 1 00:46:38.395814 env[1313]: 2025-11-01 00:46:38.363 [INFO][5064] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="80ab6533acd3b9b39bd97e4ee3743e6fafca9e527146f8c0ef2a354bb593ae74" Nov 1 00:46:38.395814 env[1313]: 2025-11-01 00:46:38.363 [INFO][5064] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="80ab6533acd3b9b39bd97e4ee3743e6fafca9e527146f8c0ef2a354bb593ae74" Nov 1 00:46:38.395814 env[1313]: 2025-11-01 00:46:38.384 [INFO][5072] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="80ab6533acd3b9b39bd97e4ee3743e6fafca9e527146f8c0ef2a354bb593ae74" HandleID="k8s-pod-network.80ab6533acd3b9b39bd97e4ee3743e6fafca9e527146f8c0ef2a354bb593ae74" Workload="localhost-k8s-whisker--8977d49c--c9phz-eth0" Nov 1 00:46:38.395814 env[1313]: 2025-11-01 00:46:38.384 [INFO][5072] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:46:38.395814 env[1313]: 2025-11-01 00:46:38.384 [INFO][5072] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:46:38.395814 env[1313]: 2025-11-01 00:46:38.390 [WARNING][5072] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="80ab6533acd3b9b39bd97e4ee3743e6fafca9e527146f8c0ef2a354bb593ae74" HandleID="k8s-pod-network.80ab6533acd3b9b39bd97e4ee3743e6fafca9e527146f8c0ef2a354bb593ae74" Workload="localhost-k8s-whisker--8977d49c--c9phz-eth0" Nov 1 00:46:38.395814 env[1313]: 2025-11-01 00:46:38.390 [INFO][5072] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="80ab6533acd3b9b39bd97e4ee3743e6fafca9e527146f8c0ef2a354bb593ae74" HandleID="k8s-pod-network.80ab6533acd3b9b39bd97e4ee3743e6fafca9e527146f8c0ef2a354bb593ae74" Workload="localhost-k8s-whisker--8977d49c--c9phz-eth0" Nov 1 00:46:38.395814 env[1313]: 2025-11-01 00:46:38.392 [INFO][5072] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:46:38.395814 env[1313]: 2025-11-01 00:46:38.394 [INFO][5064] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="80ab6533acd3b9b39bd97e4ee3743e6fafca9e527146f8c0ef2a354bb593ae74" Nov 1 00:46:38.396446 env[1313]: time="2025-11-01T00:46:38.395853487Z" level=info msg="TearDown network for sandbox \"80ab6533acd3b9b39bd97e4ee3743e6fafca9e527146f8c0ef2a354bb593ae74\" successfully" Nov 1 00:46:38.396446 env[1313]: time="2025-11-01T00:46:38.395894784Z" level=info msg="StopPodSandbox for \"80ab6533acd3b9b39bd97e4ee3743e6fafca9e527146f8c0ef2a354bb593ae74\" returns successfully" Nov 1 00:46:38.396520 env[1313]: time="2025-11-01T00:46:38.396486985Z" level=info msg="RemovePodSandbox for \"80ab6533acd3b9b39bd97e4ee3743e6fafca9e527146f8c0ef2a354bb593ae74\"" Nov 1 00:46:38.396564 env[1313]: time="2025-11-01T00:46:38.396528342Z" level=info msg="Forcibly stopping sandbox \"80ab6533acd3b9b39bd97e4ee3743e6fafca9e527146f8c0ef2a354bb593ae74\"" Nov 1 00:46:38.455212 env[1313]: 2025-11-01 00:46:38.424 [WARNING][5090] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="80ab6533acd3b9b39bd97e4ee3743e6fafca9e527146f8c0ef2a354bb593ae74" WorkloadEndpoint="localhost-k8s-whisker--8977d49c--c9phz-eth0" Nov 
1 00:46:38.455212 env[1313]: 2025-11-01 00:46:38.425 [INFO][5090] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="80ab6533acd3b9b39bd97e4ee3743e6fafca9e527146f8c0ef2a354bb593ae74" Nov 1 00:46:38.455212 env[1313]: 2025-11-01 00:46:38.425 [INFO][5090] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="80ab6533acd3b9b39bd97e4ee3743e6fafca9e527146f8c0ef2a354bb593ae74" iface="eth0" netns="" Nov 1 00:46:38.455212 env[1313]: 2025-11-01 00:46:38.425 [INFO][5090] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="80ab6533acd3b9b39bd97e4ee3743e6fafca9e527146f8c0ef2a354bb593ae74" Nov 1 00:46:38.455212 env[1313]: 2025-11-01 00:46:38.425 [INFO][5090] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="80ab6533acd3b9b39bd97e4ee3743e6fafca9e527146f8c0ef2a354bb593ae74" Nov 1 00:46:38.455212 env[1313]: 2025-11-01 00:46:38.443 [INFO][5100] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="80ab6533acd3b9b39bd97e4ee3743e6fafca9e527146f8c0ef2a354bb593ae74" HandleID="k8s-pod-network.80ab6533acd3b9b39bd97e4ee3743e6fafca9e527146f8c0ef2a354bb593ae74" Workload="localhost-k8s-whisker--8977d49c--c9phz-eth0" Nov 1 00:46:38.455212 env[1313]: 2025-11-01 00:46:38.443 [INFO][5100] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:46:38.455212 env[1313]: 2025-11-01 00:46:38.444 [INFO][5100] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:46:38.455212 env[1313]: 2025-11-01 00:46:38.450 [WARNING][5100] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="80ab6533acd3b9b39bd97e4ee3743e6fafca9e527146f8c0ef2a354bb593ae74" HandleID="k8s-pod-network.80ab6533acd3b9b39bd97e4ee3743e6fafca9e527146f8c0ef2a354bb593ae74" Workload="localhost-k8s-whisker--8977d49c--c9phz-eth0" Nov 1 00:46:38.455212 env[1313]: 2025-11-01 00:46:38.450 [INFO][5100] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="80ab6533acd3b9b39bd97e4ee3743e6fafca9e527146f8c0ef2a354bb593ae74" HandleID="k8s-pod-network.80ab6533acd3b9b39bd97e4ee3743e6fafca9e527146f8c0ef2a354bb593ae74" Workload="localhost-k8s-whisker--8977d49c--c9phz-eth0" Nov 1 00:46:38.455212 env[1313]: 2025-11-01 00:46:38.452 [INFO][5100] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:46:38.455212 env[1313]: 2025-11-01 00:46:38.453 [INFO][5090] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="80ab6533acd3b9b39bd97e4ee3743e6fafca9e527146f8c0ef2a354bb593ae74" Nov 1 00:46:38.455703 env[1313]: time="2025-11-01T00:46:38.455216136Z" level=info msg="TearDown network for sandbox \"80ab6533acd3b9b39bd97e4ee3743e6fafca9e527146f8c0ef2a354bb593ae74\" successfully" Nov 1 00:46:38.459669 env[1313]: time="2025-11-01T00:46:38.459627192Z" level=info msg="RemovePodSandbox \"80ab6533acd3b9b39bd97e4ee3743e6fafca9e527146f8c0ef2a354bb593ae74\" returns successfully" Nov 1 00:46:38.626782 env[1313]: time="2025-11-01T00:46:38.626703288Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:46:38.628200 env[1313]: time="2025-11-01T00:46:38.628093678Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:46:38.628444 kubelet[2110]: E1101 00:46:38.628362 2110 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:46:38.628761 kubelet[2110]: E1101 00:46:38.628449 2110 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:46:38.628761 kubelet[2110]: E1101 00:46:38.628568 2110 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:fa950cf830a346e59fecb654697ba8aa,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g4bvg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile
:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-547545f98f-bqwf6_calico-system(60293e01-1e82-445c-9d51-cf8544191dce): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:46:38.631988 env[1313]: time="2025-11-01T00:46:38.631943761Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:46:38.955368 env[1313]: time="2025-11-01T00:46:38.955276992Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:46:38.956635 env[1313]: time="2025-11-01T00:46:38.956570999Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:46:38.956885 kubelet[2110]: E1101 00:46:38.956834 2110 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:46:38.956965 kubelet[2110]: E1101 00:46:38.956900 2110 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:46:38.957060 kubelet[2110]: E1101 00:46:38.957024 2110 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g4bvg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:
File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-547545f98f-bqwf6_calico-system(60293e01-1e82-445c-9d51-cf8544191dce): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:46:38.958862 kubelet[2110]: E1101 00:46:38.958797 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-547545f98f-bqwf6" podUID="60293e01-1e82-445c-9d51-cf8544191dce" Nov 1 00:46:40.270866 env[1313]: time="2025-11-01T00:46:40.270799678Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:46:40.582285 env[1313]: time="2025-11-01T00:46:40.582098276Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:46:40.632081 env[1313]: time="2025-11-01T00:46:40.631988664Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:46:40.632479 kubelet[2110]: E1101 00:46:40.632418 2110 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:46:40.632837 kubelet[2110]: E1101 00:46:40.632491 2110 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:46:40.632837 kubelet[2110]: E1101 00:46:40.632715 2110 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n4n2n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7c858d548c-69qzg_calico-apiserver(ffdf82e5-9850-41df-9576-1cf8a00ef8fd): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:46:40.634276 kubelet[2110]: E1101 00:46:40.634243 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c858d548c-69qzg" podUID="ffdf82e5-9850-41df-9576-1cf8a00ef8fd" Nov 1 00:46:40.896537 systemd[1]: Started sshd@9-10.0.0.128:22-10.0.0.1:38342.service. Nov 1 00:46:40.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.128:22-10.0.0.1:38342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:46:40.899398 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 00:46:40.899581 kernel: audit: type=1130 audit(1761958000.896:438): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.128:22-10.0.0.1:38342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:46:40.933000 audit[5109]: USER_ACCT pid=5109 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:40.933788 sshd[5109]: Accepted publickey for core from 10.0.0.1 port 38342 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:46:40.942654 sshd[5109]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:46:40.941000 audit[5109]: CRED_ACQ pid=5109 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:40.946582 systemd-logind[1290]: New session 10 of user core. Nov 1 00:46:40.947467 systemd[1]: Started session-10.scope. Nov 1 00:46:40.950380 kernel: audit: type=1101 audit(1761958000.933:439): pid=5109 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:40.950608 kernel: audit: type=1103 audit(1761958000.941:440): pid=5109 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:40.954607 kernel: audit: type=1006 audit(1761958000.941:441): pid=5109 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Nov 1 00:46:40.954696 kernel: audit: type=1300 audit(1761958000.941:441): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcb233a720 a2=3 a3=0 items=0 ppid=1 pid=5109 auid=500 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:40.941000 audit[5109]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcb233a720 a2=3 a3=0 items=0 ppid=1 pid=5109 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:40.941000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:46:40.964079 kernel: audit: type=1327 audit(1761958000.941:441): proctitle=737368643A20636F7265205B707269765D Nov 1 00:46:40.964152 kernel: audit: type=1105 audit(1761958000.952:442): pid=5109 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:40.952000 audit[5109]: USER_START pid=5109 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:40.953000 audit[5112]: CRED_ACQ pid=5112 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:40.977664 kernel: audit: type=1103 audit(1761958000.953:443): pid=5112 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:41.092620 sshd[5109]: pam_unix(sshd:session): session closed for user core Nov 1 00:46:41.093000 audit[5109]: 
USER_END pid=5109 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:41.094885 systemd[1]: sshd@9-10.0.0.128:22-10.0.0.1:38342.service: Deactivated successfully. Nov 1 00:46:41.095953 systemd[1]: session-10.scope: Deactivated successfully. Nov 1 00:46:41.097219 systemd-logind[1290]: Session 10 logged out. Waiting for processes to exit. Nov 1 00:46:41.098217 systemd-logind[1290]: Removed session 10. Nov 1 00:46:41.115374 kernel: audit: type=1106 audit(1761958001.093:444): pid=5109 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:41.115508 kernel: audit: type=1104 audit(1761958001.093:445): pid=5109 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:41.093000 audit[5109]: CRED_DISP pid=5109 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:41.093000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.128:22-10.0.0.1:38342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:46:41.269569 env[1313]: time="2025-11-01T00:46:41.269526589Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:46:41.589514 env[1313]: time="2025-11-01T00:46:41.589331769Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:46:41.631335 env[1313]: time="2025-11-01T00:46:41.631252241Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:46:41.631605 kubelet[2110]: E1101 00:46:41.631546 2110 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:46:41.631654 kubelet[2110]: E1101 00:46:41.631611 2110 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:46:41.631814 kubelet[2110]: E1101 00:46:41.631752 2110 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-46898,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7c858d548c-cmrkr_calico-apiserver(704976ec-fdca-4788-bd96-1a541f0cf01c): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:46:41.633041 kubelet[2110]: E1101 00:46:41.632969 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c858d548c-cmrkr" podUID="704976ec-fdca-4788-bd96-1a541f0cf01c" Nov 1 00:46:42.270132 env[1313]: time="2025-11-01T00:46:42.270061138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:46:42.725194 env[1313]: time="2025-11-01T00:46:42.725088466Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:46:42.727509 env[1313]: time="2025-11-01T00:46:42.727404001Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:46:42.727754 kubelet[2110]: E1101 00:46:42.727704 2110 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:46:42.728082 kubelet[2110]: E1101 00:46:42.727773 2110 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:46:42.728082 kubelet[2110]: E1101 00:46:42.727940 2110 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lq9vl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource
{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zk5w7_calico-system(323323dc-c361-4116-a022-8e5f45430869): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:46:42.730479 env[1313]: time="2025-11-01T00:46:42.730418676Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:46:43.038600 env[1313]: time="2025-11-01T00:46:43.038402941Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:46:43.040034 env[1313]: time="2025-11-01T00:46:43.039965839Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:46:43.040362 kubelet[2110]: E1101 00:46:43.040268 2110 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:46:43.040362 kubelet[2110]: E1101 00:46:43.040336 2110 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:46:43.040495 kubelet[2110]: E1101 00:46:43.040465 2110 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lq9vl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,En
vFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zk5w7_calico-system(323323dc-c361-4116-a022-8e5f45430869): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:46:43.041716 kubelet[2110]: E1101 00:46:43.041654 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zk5w7" podUID="323323dc-c361-4116-a022-8e5f45430869" Nov 1 00:46:43.270185 env[1313]: time="2025-11-01T00:46:43.270127149Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:46:43.753505 env[1313]: time="2025-11-01T00:46:43.753413347Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:46:43.839286 env[1313]: time="2025-11-01T00:46:43.839121219Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:46:43.839579 kubelet[2110]: E1101 00:46:43.839525 2110 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:46:43.839865 kubelet[2110]: E1101 00:46:43.839586 2110 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:46:43.839865 kubelet[2110]: E1101 00:46:43.839773 2110 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lrxgv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-bbnwx_calico-system(5fbcbf90-e90d-4d2e-bb2c-68aa5206a338): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:46:43.840946 kubelet[2110]: E1101 00:46:43.840905 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bbnwx" podUID="5fbcbf90-e90d-4d2e-bb2c-68aa5206a338" Nov 1 00:46:44.270679 env[1313]: time="2025-11-01T00:46:44.270624967Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:46:44.639442 env[1313]: time="2025-11-01T00:46:44.639258948Z" level=info msg="trying next host - response was 
http.StatusNotFound" host=ghcr.io Nov 1 00:46:44.646171 env[1313]: time="2025-11-01T00:46:44.646090959Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:46:44.646402 kubelet[2110]: E1101 00:46:44.646364 2110 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:46:44.646472 kubelet[2110]: E1101 00:46:44.646415 2110 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:46:44.646681 kubelet[2110]: E1101 00:46:44.646567 2110 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4mfw7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5dcb6947d5-ljzpr_calico-system(6fe081ef-ff27-4230-8865-b572345e2224): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:46:44.647862 kubelet[2110]: E1101 00:46:44.647815 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5dcb6947d5-ljzpr" podUID="6fe081ef-ff27-4230-8865-b572345e2224" Nov 1 00:46:46.095883 systemd[1]: Started sshd@10-10.0.0.128:22-10.0.0.1:38356.service. 
Nov 1 00:46:46.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.128:22-10.0.0.1:38356 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:46:46.097949 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 00:46:46.097999 kernel: audit: type=1130 audit(1761958006.094:447): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.128:22-10.0.0.1:38356 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:46:46.126000 audit[5132]: USER_ACCT pid=5132 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:46.127654 sshd[5132]: Accepted publickey for core from 10.0.0.1 port 38356 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:46:46.129542 sshd[5132]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:46:46.133735 systemd-logind[1290]: New session 11 of user core. Nov 1 00:46:46.133941 systemd[1]: Started session-11.scope. 
Nov 1 00:46:46.127000 audit[5132]: CRED_ACQ pid=5132 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:46.142796 kernel: audit: type=1101 audit(1761958006.126:448): pid=5132 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:46.142911 kernel: audit: type=1103 audit(1761958006.127:449): pid=5132 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:46.142948 kernel: audit: type=1006 audit(1761958006.127:450): pid=5132 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Nov 1 00:46:46.127000 audit[5132]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcd6360360 a2=3 a3=0 items=0 ppid=1 pid=5132 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:46.154648 kernel: audit: type=1300 audit(1761958006.127:450): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcd6360360 a2=3 a3=0 items=0 ppid=1 pid=5132 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:46.154717 kernel: audit: type=1327 audit(1761958006.127:450): proctitle=737368643A20636F7265205B707269765D Nov 1 00:46:46.127000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:46:46.137000 audit[5132]: USER_START pid=5132 uid=0 
auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:46.165969 kernel: audit: type=1105 audit(1761958006.137:451): pid=5132 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:46.166035 kernel: audit: type=1103 audit(1761958006.139:452): pid=5135 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:46.139000 audit[5135]: CRED_ACQ pid=5135 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:46.272369 sshd[5132]: pam_unix(sshd:session): session closed for user core Nov 1 00:46:46.272000 audit[5132]: USER_END pid=5132 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:46.275108 systemd[1]: sshd@10-10.0.0.128:22-10.0.0.1:38356.service: Deactivated successfully. Nov 1 00:46:46.276368 systemd[1]: session-11.scope: Deactivated successfully. Nov 1 00:46:46.277656 systemd-logind[1290]: Session 11 logged out. Waiting for processes to exit. Nov 1 00:46:46.278748 systemd-logind[1290]: Removed session 11. 
Nov 1 00:46:46.272000 audit[5132]: CRED_DISP pid=5132 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:46.286612 kernel: audit: type=1106 audit(1761958006.272:453): pid=5132 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:46.286660 kernel: audit: type=1104 audit(1761958006.272:454): pid=5132 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:46.272000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.128:22-10.0.0.1:38356 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:46:51.271270 kubelet[2110]: E1101 00:46:51.271222 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-547545f98f-bqwf6" podUID="60293e01-1e82-445c-9d51-cf8544191dce" Nov 1 00:46:51.276503 systemd[1]: Started sshd@11-10.0.0.128:22-10.0.0.1:58342.service. Nov 1 00:46:51.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.128:22-10.0.0.1:58342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:46:51.278583 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 00:46:51.278639 kernel: audit: type=1130 audit(1761958011.275:456): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.128:22-10.0.0.1:58342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:46:51.307000 audit[5150]: USER_ACCT pid=5150 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:51.309275 sshd[5150]: Accepted publickey for core from 10.0.0.1 port 58342 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:46:51.315994 sshd[5150]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:46:51.314000 audit[5150]: CRED_ACQ pid=5150 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:51.320461 systemd-logind[1290]: New session 12 of user core. Nov 1 00:46:51.321206 systemd[1]: Started session-12.scope. Nov 1 00:46:51.322659 kernel: audit: type=1101 audit(1761958011.307:457): pid=5150 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:51.322736 kernel: audit: type=1103 audit(1761958011.314:458): pid=5150 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:51.322754 kernel: audit: type=1006 audit(1761958011.314:459): pid=5150 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Nov 1 00:46:51.327458 kernel: audit: type=1300 audit(1761958011.314:459): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe78ca1100 a2=3 a3=0 items=0 ppid=1 pid=5150 auid=500 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:51.314000 audit[5150]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe78ca1100 a2=3 a3=0 items=0 ppid=1 pid=5150 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:51.334812 kernel: audit: type=1327 audit(1761958011.314:459): proctitle=737368643A20636F7265205B707269765D Nov 1 00:46:51.314000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:46:51.337082 kernel: audit: type=1105 audit(1761958011.326:460): pid=5150 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:51.326000 audit[5150]: USER_START pid=5150 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:51.344254 kernel: audit: type=1103 audit(1761958011.327:461): pid=5153 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:51.327000 audit[5153]: CRED_ACQ pid=5153 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:51.438057 sshd[5150]: pam_unix(sshd:session): session closed for user core Nov 1 00:46:51.437000 audit[5150]: 
USER_END pid=5150 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:51.440683 systemd[1]: Started sshd@12-10.0.0.128:22-10.0.0.1:58346.service. Nov 1 00:46:51.438000 audit[5150]: CRED_DISP pid=5150 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:51.441132 systemd[1]: sshd@11-10.0.0.128:22-10.0.0.1:58342.service: Deactivated successfully. Nov 1 00:46:51.441838 systemd[1]: session-12.scope: Deactivated successfully. Nov 1 00:46:51.445629 systemd-logind[1290]: Session 12 logged out. Waiting for processes to exit. Nov 1 00:46:51.446503 systemd-logind[1290]: Removed session 12. Nov 1 00:46:51.454383 kernel: audit: type=1106 audit(1761958011.437:462): pid=5150 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:51.454488 kernel: audit: type=1104 audit(1761958011.438:463): pid=5150 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:51.439000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.128:22-10.0.0.1:58346 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:46:51.439000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.128:22-10.0.0.1:58342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:46:51.476000 audit[5163]: USER_ACCT pid=5163 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:51.477623 sshd[5163]: Accepted publickey for core from 10.0.0.1 port 58346 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:46:51.477000 audit[5163]: CRED_ACQ pid=5163 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:51.477000 audit[5163]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc0b49e180 a2=3 a3=0 items=0 ppid=1 pid=5163 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:51.477000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:46:51.478672 sshd[5163]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:46:51.482026 systemd-logind[1290]: New session 13 of user core. Nov 1 00:46:51.482768 systemd[1]: Started session-13.scope. 
Nov 1 00:46:51.486000 audit[5163]: USER_START pid=5163 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:51.487000 audit[5168]: CRED_ACQ pid=5168 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:51.629324 systemd[1]: Started sshd@13-10.0.0.128:22-10.0.0.1:58348.service. Nov 1 00:46:51.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.128:22-10.0.0.1:58348 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:46:51.629941 sshd[5163]: pam_unix(sshd:session): session closed for user core Nov 1 00:46:51.629000 audit[5163]: USER_END pid=5163 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:51.630000 audit[5163]: CRED_DISP pid=5163 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:51.632930 systemd[1]: sshd@12-10.0.0.128:22-10.0.0.1:58346.service: Deactivated successfully. Nov 1 00:46:51.631000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.128:22-10.0.0.1:58346 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:46:51.633759 systemd[1]: session-13.scope: Deactivated successfully. Nov 1 00:46:51.634523 systemd-logind[1290]: Session 13 logged out. Waiting for processes to exit. Nov 1 00:46:51.635534 systemd-logind[1290]: Removed session 13. Nov 1 00:46:51.668000 audit[5176]: USER_ACCT pid=5176 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:51.669980 sshd[5176]: Accepted publickey for core from 10.0.0.1 port 58348 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:46:51.669000 audit[5176]: CRED_ACQ pid=5176 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:51.669000 audit[5176]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc0d16da40 a2=3 a3=0 items=0 ppid=1 pid=5176 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:51.669000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:46:51.671218 sshd[5176]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:46:51.675647 systemd-logind[1290]: New session 14 of user core. Nov 1 00:46:51.676716 systemd[1]: Started session-14.scope. 
Nov 1 00:46:51.681000 audit[5176]: USER_START pid=5176 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:51.682000 audit[5181]: CRED_ACQ pid=5181 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:51.791601 sshd[5176]: pam_unix(sshd:session): session closed for user core Nov 1 00:46:51.791000 audit[5176]: USER_END pid=5176 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:51.791000 audit[5176]: CRED_DISP pid=5176 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:51.793602 systemd[1]: sshd@13-10.0.0.128:22-10.0.0.1:58348.service: Deactivated successfully. Nov 1 00:46:51.792000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.128:22-10.0.0.1:58348 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:46:51.794397 systemd[1]: session-14.scope: Deactivated successfully. Nov 1 00:46:51.795087 systemd-logind[1290]: Session 14 logged out. Waiting for processes to exit. Nov 1 00:46:51.795909 systemd-logind[1290]: Removed session 14. 
Nov 1 00:46:52.271240 kubelet[2110]: E1101 00:46:52.271077 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c858d548c-cmrkr" podUID="704976ec-fdca-4788-bd96-1a541f0cf01c" Nov 1 00:46:53.967266 kubelet[2110]: E1101 00:46:53.967235 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:46:55.269749 kubelet[2110]: E1101 00:46:55.269687 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c858d548c-69qzg" podUID="ffdf82e5-9850-41df-9576-1cf8a00ef8fd" Nov 1 00:46:56.269799 kubelet[2110]: E1101 00:46:56.269763 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:46:56.795558 systemd[1]: Started sshd@14-10.0.0.128:22-10.0.0.1:58360.service. 
Nov 1 00:46:56.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.128:22-10.0.0.1:58360 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:46:56.797551 kernel: kauditd_printk_skb: 23 callbacks suppressed Nov 1 00:46:56.797706 kernel: audit: type=1130 audit(1761958016.795:483): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.128:22-10.0.0.1:58360 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:46:56.831000 audit[5214]: USER_ACCT pid=5214 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:56.831787 sshd[5214]: Accepted publickey for core from 10.0.0.1 port 58360 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:46:56.833990 sshd[5214]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:46:56.838452 kernel: audit: type=1101 audit(1761958016.831:484): pid=5214 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:56.838521 kernel: audit: type=1103 audit(1761958016.833:485): pid=5214 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:56.833000 audit[5214]: CRED_ACQ pid=5214 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:56.837981 systemd-logind[1290]: New session 15 of user core. Nov 1 00:46:56.838315 systemd[1]: Started session-15.scope. Nov 1 00:46:56.861741 kernel: audit: type=1006 audit(1761958016.833:486): pid=5214 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Nov 1 00:46:56.861816 kernel: audit: type=1300 audit(1761958016.833:486): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe6b2d7e70 a2=3 a3=0 items=0 ppid=1 pid=5214 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:56.861835 kernel: audit: type=1327 audit(1761958016.833:486): proctitle=737368643A20636F7265205B707269765D Nov 1 00:46:56.861851 kernel: audit: type=1105 audit(1761958016.843:487): pid=5214 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:56.833000 audit[5214]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe6b2d7e70 a2=3 a3=0 items=0 ppid=1 pid=5214 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:46:56.833000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:46:56.843000 audit[5214]: USER_START pid=5214 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:56.844000 audit[5217]: CRED_ACQ pid=5217 uid=0 
auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:56.871091 kernel: audit: type=1103 audit(1761958016.844:488): pid=5217 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:56.965746 sshd[5214]: pam_unix(sshd:session): session closed for user core Nov 1 00:46:56.966000 audit[5214]: USER_END pid=5214 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:56.968368 systemd[1]: sshd@14-10.0.0.128:22-10.0.0.1:58360.service: Deactivated successfully. Nov 1 00:46:56.969391 systemd[1]: session-15.scope: Deactivated successfully. Nov 1 00:46:56.969440 systemd-logind[1290]: Session 15 logged out. Waiting for processes to exit. Nov 1 00:46:56.970336 systemd-logind[1290]: Removed session 15. 
Nov 1 00:46:56.966000 audit[5214]: CRED_DISP pid=5214 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:57.004026 kernel: audit: type=1106 audit(1761958016.966:489): pid=5214 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:57.004175 kernel: audit: type=1104 audit(1761958016.966:490): pid=5214 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:46:56.968000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.128:22-10.0.0.1:58360 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:46:57.272952 kubelet[2110]: E1101 00:46:57.272405 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zk5w7" podUID="323323dc-c361-4116-a022-8e5f45430869" Nov 1 00:46:58.269678 kubelet[2110]: E1101 00:46:58.269635 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5dcb6947d5-ljzpr" podUID="6fe081ef-ff27-4230-8865-b572345e2224" Nov 1 00:46:58.269934 kubelet[2110]: E1101 00:46:58.269635 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bbnwx" podUID="5fbcbf90-e90d-4d2e-bb2c-68aa5206a338" Nov 1 00:46:59.269660 kubelet[2110]: E1101 00:46:59.269527 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:47:01.969150 systemd[1]: Started sshd@15-10.0.0.128:22-10.0.0.1:39822.service. Nov 1 00:47:01.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.128:22-10.0.0.1:39822 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:47:01.971381 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 00:47:01.971456 kernel: audit: type=1130 audit(1761958021.968:492): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.128:22-10.0.0.1:39822 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:47:02.001000 audit[5232]: USER_ACCT pid=5232 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:02.002697 sshd[5232]: Accepted publickey for core from 10.0.0.1 port 39822 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:47:02.007686 sshd[5232]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:47:02.006000 audit[5232]: CRED_ACQ pid=5232 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:02.012643 systemd-logind[1290]: New session 16 of user core. Nov 1 00:47:02.013355 systemd[1]: Started session-16.scope. Nov 1 00:47:02.016919 kernel: audit: type=1101 audit(1761958022.001:493): pid=5232 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:02.017002 kernel: audit: type=1103 audit(1761958022.006:494): pid=5232 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:02.021187 kernel: audit: type=1006 audit(1761958022.006:495): pid=5232 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Nov 1 00:47:02.021285 kernel: audit: type=1300 audit(1761958022.006:495): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc824e78e0 a2=3 a3=0 items=0 ppid=1 pid=5232 auid=500 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:47:02.006000 audit[5232]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc824e78e0 a2=3 a3=0 items=0 ppid=1 pid=5232 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:47:02.006000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:47:02.031261 kernel: audit: type=1327 audit(1761958022.006:495): proctitle=737368643A20636F7265205B707269765D Nov 1 00:47:02.031358 kernel: audit: type=1105 audit(1761958022.018:496): pid=5232 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:02.018000 audit[5232]: USER_START pid=5232 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:02.038452 kernel: audit: type=1103 audit(1761958022.019:497): pid=5235 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:02.019000 audit[5235]: CRED_ACQ pid=5235 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:02.164423 sshd[5232]: pam_unix(sshd:session): session closed for user core Nov 1 00:47:02.165000 audit[5232]: 
USER_END pid=5232 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:02.167100 systemd[1]: sshd@15-10.0.0.128:22-10.0.0.1:39822.service: Deactivated successfully. Nov 1 00:47:02.168225 systemd[1]: session-16.scope: Deactivated successfully. Nov 1 00:47:02.169089 systemd-logind[1290]: Session 16 logged out. Waiting for processes to exit. Nov 1 00:47:02.170040 systemd-logind[1290]: Removed session 16. Nov 1 00:47:02.165000 audit[5232]: CRED_DISP pid=5232 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:02.180506 kernel: audit: type=1106 audit(1761958022.165:498): pid=5232 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:02.180568 kernel: audit: type=1104 audit(1761958022.165:499): pid=5232 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:02.165000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.128:22-10.0.0.1:39822 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:47:03.270405 env[1313]: time="2025-11-01T00:47:03.270301364Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:47:03.611591 env[1313]: time="2025-11-01T00:47:03.611411720Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:47:03.651915 env[1313]: time="2025-11-01T00:47:03.651819507Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:47:03.652124 kubelet[2110]: E1101 00:47:03.652080 2110 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:47:03.652456 kubelet[2110]: E1101 00:47:03.652132 2110 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:47:03.652456 kubelet[2110]: E1101 00:47:03.652244 2110 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:fa950cf830a346e59fecb654697ba8aa,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g4bvg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-547545f98f-bqwf6_calico-system(60293e01-1e82-445c-9d51-cf8544191dce): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:47:03.654274 env[1313]: time="2025-11-01T00:47:03.654235186Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:47:04.003829 
env[1313]: time="2025-11-01T00:47:04.003775934Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:47:04.057507 env[1313]: time="2025-11-01T00:47:04.057418360Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:47:04.057857 kubelet[2110]: E1101 00:47:04.057796 2110 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:47:04.057923 kubelet[2110]: E1101 00:47:04.057868 2110 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:47:04.058068 kubelet[2110]: E1101 00:47:04.058024 2110 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g4bvg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-547545f98f-bqwf6_calico-system(60293e01-1e82-445c-9d51-cf8544191dce): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:47:04.059441 kubelet[2110]: E1101 00:47:04.059387 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-547545f98f-bqwf6" podUID="60293e01-1e82-445c-9d51-cf8544191dce" Nov 1 00:47:05.270894 env[1313]: time="2025-11-01T00:47:05.270818507Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:47:05.583099 env[1313]: time="2025-11-01T00:47:05.582916206Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:47:05.599702 env[1313]: time="2025-11-01T00:47:05.599605592Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:47:05.599949 kubelet[2110]: E1101 00:47:05.599904 2110 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:47:05.600275 kubelet[2110]: E1101 00:47:05.599969 2110 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:47:05.600275 kubelet[2110]: E1101 00:47:05.600112 2110 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-46898,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7c858d548c-cmrkr_calico-apiserver(704976ec-fdca-4788-bd96-1a541f0cf01c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:47:05.601458 kubelet[2110]: E1101 00:47:05.601397 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c858d548c-cmrkr" podUID="704976ec-fdca-4788-bd96-1a541f0cf01c" Nov 1 00:47:07.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.128:22-10.0.0.1:39832 comm="systemd" 
exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:47:07.168727 systemd[1]: Started sshd@16-10.0.0.128:22-10.0.0.1:39832.service. Nov 1 00:47:07.191323 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 00:47:07.191468 kernel: audit: type=1130 audit(1761958027.167:501): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.128:22-10.0.0.1:39832 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:47:07.269795 kubelet[2110]: E1101 00:47:07.268923 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:47:07.270276 env[1313]: time="2025-11-01T00:47:07.269681423Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:47:07.296000 audit[5252]: USER_ACCT pid=5252 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:07.298502 sshd[5252]: Accepted publickey for core from 10.0.0.1 port 39832 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:47:07.300895 sshd[5252]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:47:07.305889 systemd[1]: Started session-17.scope. Nov 1 00:47:07.306063 systemd-logind[1290]: New session 17 of user core. 
Nov 1 00:47:07.307380 kernel: audit: type=1101 audit(1761958027.296:502): pid=5252 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:07.307440 kernel: audit: type=1103 audit(1761958027.299:503): pid=5252 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:07.299000 audit[5252]: CRED_ACQ pid=5252 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:07.340860 kernel: audit: type=1006 audit(1761958027.299:504): pid=5252 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Nov 1 00:47:07.299000 audit[5252]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff937a8380 a2=3 a3=0 items=0 ppid=1 pid=5252 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:47:07.349621 kernel: audit: type=1300 audit(1761958027.299:504): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff937a8380 a2=3 a3=0 items=0 ppid=1 pid=5252 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:47:07.299000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:47:07.352578 kernel: audit: type=1327 audit(1761958027.299:504): proctitle=737368643A20636F7265205B707269765D Nov 1 00:47:07.352623 kernel: audit: type=1105 
audit(1761958027.311:505): pid=5252 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:07.311000 audit[5252]: USER_START pid=5252 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:07.313000 audit[5255]: CRED_ACQ pid=5255 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:07.369144 kernel: audit: type=1103 audit(1761958027.313:506): pid=5255 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:07.539897 sshd[5252]: pam_unix(sshd:session): session closed for user core Nov 1 00:47:07.539000 audit[5252]: USER_END pid=5252 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:07.542847 systemd[1]: sshd@16-10.0.0.128:22-10.0.0.1:39832.service: Deactivated successfully. 
Nov 1 00:47:07.564959 kernel: audit: type=1106 audit(1761958027.539:507): pid=5252 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:07.565025 kernel: audit: type=1104 audit(1761958027.539:508): pid=5252 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:07.539000 audit[5252]: CRED_DISP pid=5252 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:07.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.128:22-10.0.0.1:39832 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:47:07.544027 systemd[1]: session-17.scope: Deactivated successfully. Nov 1 00:47:07.544469 systemd-logind[1290]: Session 17 logged out. Waiting for processes to exit. Nov 1 00:47:07.545271 systemd-logind[1290]: Removed session 17. 
Nov 1 00:47:07.651786 env[1313]: time="2025-11-01T00:47:07.651730197Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:47:07.755719 env[1313]: time="2025-11-01T00:47:07.755614767Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:47:07.755988 kubelet[2110]: E1101 00:47:07.755935 2110 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:47:07.756065 kubelet[2110]: E1101 00:47:07.755989 2110 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:47:07.756149 kubelet[2110]: E1101 00:47:07.756110 2110 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n4n2n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7c858d548c-69qzg_calico-apiserver(ffdf82e5-9850-41df-9576-1cf8a00ef8fd): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:47:07.757270 kubelet[2110]: E1101 00:47:07.757243 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c858d548c-69qzg" podUID="ffdf82e5-9850-41df-9576-1cf8a00ef8fd" Nov 1 00:47:08.271581 kubelet[2110]: E1101 00:47:08.271527 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:47:08.272184 env[1313]: time="2025-11-01T00:47:08.272144422Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:47:08.602650 env[1313]: time="2025-11-01T00:47:08.602482969Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:47:08.603755 env[1313]: time="2025-11-01T00:47:08.603711868Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:47:08.603990 kubelet[2110]: E1101 00:47:08.603931 2110 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:47:08.604082 kubelet[2110]: E1101 00:47:08.603995 2110 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:47:08.604227 kubelet[2110]: E1101 00:47:08.604142 2110 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lq9vl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*f
alse,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zk5w7_calico-system(323323dc-c361-4116-a022-8e5f45430869): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:47:08.605970 env[1313]: time="2025-11-01T00:47:08.605945191Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:47:08.910597 env[1313]: time="2025-11-01T00:47:08.910134556Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:47:08.998375 env[1313]: time="2025-11-01T00:47:08.998253560Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:47:08.998677 kubelet[2110]: E1101 00:47:08.998625 2110 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:47:08.998738 
kubelet[2110]: E1101 00:47:08.998691 2110 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:47:08.998899 kubelet[2110]: E1101 00:47:08.998844 2110 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lq9vl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*
false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zk5w7_calico-system(323323dc-c361-4116-a022-8e5f45430869): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:47:09.000124 kubelet[2110]: E1101 00:47:09.000066 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zk5w7" podUID="323323dc-c361-4116-a022-8e5f45430869" Nov 1 00:47:11.269642 env[1313]: time="2025-11-01T00:47:11.269584064Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:47:11.620596 env[1313]: time="2025-11-01T00:47:11.620440331Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 
00:47:11.644698 env[1313]: time="2025-11-01T00:47:11.644620526Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:47:11.645140 kubelet[2110]: E1101 00:47:11.645095 2110 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:47:11.645437 kubelet[2110]: E1101 00:47:11.645149 2110 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:47:11.645437 kubelet[2110]: E1101 00:47:11.645287 2110 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lrxgv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-bbnwx_calico-system(5fbcbf90-e90d-4d2e-bb2c-68aa5206a338): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:47:11.647298 kubelet[2110]: E1101 00:47:11.647258 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bbnwx" podUID="5fbcbf90-e90d-4d2e-bb2c-68aa5206a338" Nov 1 00:47:12.544335 systemd[1]: Started sshd@17-10.0.0.128:22-10.0.0.1:54232.service. 
Nov 1 00:47:12.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.128:22-10.0.0.1:54232 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:47:12.546609 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 00:47:12.546675 kernel: audit: type=1130 audit(1761958032.543:510): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.128:22-10.0.0.1:54232 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:47:12.583000 audit[5269]: USER_ACCT pid=5269 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:12.584965 sshd[5269]: Accepted publickey for core from 10.0.0.1 port 54232 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:47:12.586845 sshd[5269]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:47:12.584000 audit[5269]: CRED_ACQ pid=5269 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:12.595761 systemd[1]: Started session-18.scope. Nov 1 00:47:12.596092 systemd-logind[1290]: New session 18 of user core. 
Nov 1 00:47:12.599602 kernel: audit: type=1101 audit(1761958032.583:511): pid=5269 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:12.599713 kernel: audit: type=1103 audit(1761958032.584:512): pid=5269 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:12.605272 kernel: audit: type=1006 audit(1761958032.584:513): pid=5269 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Nov 1 00:47:12.605417 kernel: audit: type=1300 audit(1761958032.584:513): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe0deccfa0 a2=3 a3=0 items=0 ppid=1 pid=5269 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:47:12.584000 audit[5269]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe0deccfa0 a2=3 a3=0 items=0 ppid=1 pid=5269 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:47:12.584000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:47:12.618121 kernel: audit: type=1327 audit(1761958032.584:513): proctitle=737368643A20636F7265205B707269765D Nov 1 00:47:12.618284 kernel: audit: type=1105 audit(1761958032.603:514): pid=5269 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Nov 1 00:47:12.603000 audit[5269]: USER_START pid=5269 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:12.605000 audit[5272]: CRED_ACQ pid=5272 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:12.634081 kernel: audit: type=1103 audit(1761958032.605:515): pid=5272 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:12.768234 sshd[5269]: pam_unix(sshd:session): session closed for user core Nov 1 00:47:12.768000 audit[5269]: USER_END pid=5269 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:12.771307 systemd[1]: Started sshd@18-10.0.0.128:22-10.0.0.1:54236.service. Nov 1 00:47:12.772824 systemd[1]: sshd@17-10.0.0.128:22-10.0.0.1:54232.service: Deactivated successfully. Nov 1 00:47:12.774251 systemd-logind[1290]: Session 18 logged out. Waiting for processes to exit. Nov 1 00:47:12.774303 systemd[1]: session-18.scope: Deactivated successfully. Nov 1 00:47:12.775197 systemd-logind[1290]: Removed session 18. 
Nov 1 00:47:12.769000 audit[5269]: CRED_DISP pid=5269 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:12.791121 kernel: audit: type=1106 audit(1761958032.768:516): pid=5269 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:12.791313 kernel: audit: type=1104 audit(1761958032.769:517): pid=5269 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:12.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.128:22-10.0.0.1:54236 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:47:12.771000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.128:22-10.0.0.1:54232 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:47:12.830000 audit[5284]: USER_ACCT pid=5284 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:12.832606 sshd[5284]: Accepted publickey for core from 10.0.0.1 port 54236 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:47:12.832000 audit[5284]: CRED_ACQ pid=5284 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:12.832000 audit[5284]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc5cb1ff00 a2=3 a3=0 items=0 ppid=1 pid=5284 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:47:12.832000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:47:12.835039 sshd[5284]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:47:12.840608 systemd-logind[1290]: New session 19 of user core. Nov 1 00:47:12.841787 systemd[1]: Started session-19.scope. 
Nov 1 00:47:12.853000 audit[5284]: USER_START pid=5284 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:12.857000 audit[5289]: CRED_ACQ pid=5289 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:13.269084 kubelet[2110]: E1101 00:47:13.269050 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:47:13.269767 env[1313]: time="2025-11-01T00:47:13.269738757Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:47:13.621119 env[1313]: time="2025-11-01T00:47:13.620957253Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:47:13.658185 env[1313]: time="2025-11-01T00:47:13.658117263Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:47:13.658407 kubelet[2110]: E1101 00:47:13.658363 2110 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 
00:47:13.658511 kubelet[2110]: E1101 00:47:13.658420 2110 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:47:13.658646 kubelet[2110]: E1101 00:47:13.658585 2110 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4mfw7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5dcb6947d5-ljzpr_calico-system(6fe081ef-ff27-4230-8865-b572345e2224): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:47:13.659734 kubelet[2110]: E1101 00:47:13.659702 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-5dcb6947d5-ljzpr" podUID="6fe081ef-ff27-4230-8865-b572345e2224" Nov 1 00:47:13.694252 sshd[5284]: pam_unix(sshd:session): session closed for user core Nov 1 00:47:13.694000 audit[5284]: USER_END pid=5284 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:13.694000 audit[5284]: CRED_DISP pid=5284 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:13.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.128:22-10.0.0.1:54252 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:47:13.696560 systemd[1]: Started sshd@19-10.0.0.128:22-10.0.0.1:54252.service. Nov 1 00:47:13.697566 systemd[1]: sshd@18-10.0.0.128:22-10.0.0.1:54236.service: Deactivated successfully. Nov 1 00:47:13.696000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.128:22-10.0.0.1:54236 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:47:13.698675 systemd[1]: session-19.scope: Deactivated successfully. Nov 1 00:47:13.699276 systemd-logind[1290]: Session 19 logged out. Waiting for processes to exit. Nov 1 00:47:13.700418 systemd-logind[1290]: Removed session 19. 
Nov 1 00:47:13.726000 audit[5296]: USER_ACCT pid=5296 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:13.727561 sshd[5296]: Accepted publickey for core from 10.0.0.1 port 54252 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:47:13.726000 audit[5296]: CRED_ACQ pid=5296 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:13.726000 audit[5296]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe211889d0 a2=3 a3=0 items=0 ppid=1 pid=5296 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:47:13.726000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:47:13.728482 sshd[5296]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:47:13.731506 systemd-logind[1290]: New session 20 of user core. Nov 1 00:47:13.732244 systemd[1]: Started session-20.scope. 
Nov 1 00:47:13.734000 audit[5296]: USER_START pid=5296 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:13.735000 audit[5301]: CRED_ACQ pid=5301 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:14.604000 audit[5313]: NETFILTER_CFG table=filter:137 family=2 entries=26 op=nft_register_rule pid=5313 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:47:14.604000 audit[5313]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffcd5235b90 a2=0 a3=7ffcd5235b7c items=0 ppid=2264 pid=5313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:47:14.604000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:47:14.608000 audit[5313]: NETFILTER_CFG table=nat:138 family=2 entries=20 op=nft_register_rule pid=5313 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:47:14.608000 audit[5313]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffcd5235b90 a2=0 a3=0 items=0 ppid=2264 pid=5313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:47:14.608000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:47:14.617397 sshd[5296]: 
pam_unix(sshd:session): session closed for user core Nov 1 00:47:14.620061 systemd[1]: Started sshd@20-10.0.0.128:22-10.0.0.1:54256.service. Nov 1 00:47:14.618000 audit[5296]: USER_END pid=5296 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:14.618000 audit[5296]: CRED_DISP pid=5296 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:14.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.128:22-10.0.0.1:54256 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:47:14.621338 systemd[1]: sshd@19-10.0.0.128:22-10.0.0.1:54252.service: Deactivated successfully. Nov 1 00:47:14.620000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.128:22-10.0.0.1:54252 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:47:14.622810 systemd[1]: session-20.scope: Deactivated successfully. Nov 1 00:47:14.623371 systemd-logind[1290]: Session 20 logged out. Waiting for processes to exit. Nov 1 00:47:14.624386 systemd-logind[1290]: Removed session 20. 
Nov 1 00:47:14.628000 audit[5318]: NETFILTER_CFG table=filter:139 family=2 entries=38 op=nft_register_rule pid=5318 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:47:14.628000 audit[5318]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffe025f2010 a2=0 a3=7ffe025f1ffc items=0 ppid=2264 pid=5318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:47:14.628000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:47:14.633000 audit[5318]: NETFILTER_CFG table=nat:140 family=2 entries=20 op=nft_register_rule pid=5318 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:47:14.633000 audit[5318]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe025f2010 a2=0 a3=0 items=0 ppid=2264 pid=5318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:47:14.633000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:47:14.666000 audit[5314]: USER_ACCT pid=5314 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:14.667939 sshd[5314]: Accepted publickey for core from 10.0.0.1 port 54256 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:47:14.667000 audit[5314]: CRED_ACQ pid=5314 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:14.667000 audit[5314]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd916abd90 a2=3 a3=0 items=0 ppid=1 pid=5314 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:47:14.667000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:47:14.669494 sshd[5314]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:47:14.673778 systemd-logind[1290]: New session 21 of user core. Nov 1 00:47:14.674673 systemd[1]: Started session-21.scope. Nov 1 00:47:14.678000 audit[5314]: USER_START pid=5314 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:14.679000 audit[5321]: CRED_ACQ pid=5321 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:14.933433 sshd[5314]: pam_unix(sshd:session): session closed for user core Nov 1 00:47:14.933000 audit[5314]: USER_END pid=5314 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:14.933000 audit[5314]: CRED_DISP pid=5314 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:14.936387 
systemd[1]: Started sshd@21-10.0.0.128:22-10.0.0.1:54260.service. Nov 1 00:47:14.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.128:22-10.0.0.1:54260 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:47:14.936811 systemd[1]: sshd@20-10.0.0.128:22-10.0.0.1:54256.service: Deactivated successfully. Nov 1 00:47:14.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.128:22-10.0.0.1:54256 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:47:14.941763 systemd[1]: session-21.scope: Deactivated successfully. Nov 1 00:47:14.942176 systemd-logind[1290]: Session 21 logged out. Waiting for processes to exit. Nov 1 00:47:14.943155 systemd-logind[1290]: Removed session 21. Nov 1 00:47:14.968000 audit[5329]: USER_ACCT pid=5329 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:14.969648 sshd[5329]: Accepted publickey for core from 10.0.0.1 port 54260 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:47:14.969000 audit[5329]: CRED_ACQ pid=5329 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:14.969000 audit[5329]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcbe4a4170 a2=3 a3=0 items=0 ppid=1 pid=5329 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:47:14.969000 audit: PROCTITLE 
proctitle=737368643A20636F7265205B707269765D Nov 1 00:47:14.970815 sshd[5329]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:47:14.974620 systemd-logind[1290]: New session 22 of user core. Nov 1 00:47:14.975634 systemd[1]: Started session-22.scope. Nov 1 00:47:14.978000 audit[5329]: USER_START pid=5329 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:14.979000 audit[5333]: CRED_ACQ pid=5333 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:15.088002 sshd[5329]: pam_unix(sshd:session): session closed for user core Nov 1 00:47:15.087000 audit[5329]: USER_END pid=5329 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:15.087000 audit[5329]: CRED_DISP pid=5329 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:15.090336 systemd[1]: sshd@21-10.0.0.128:22-10.0.0.1:54260.service: Deactivated successfully. Nov 1 00:47:15.089000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.128:22-10.0.0.1:54260 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:47:15.091286 systemd[1]: session-22.scope: Deactivated successfully. 
Nov 1 00:47:15.092235 systemd-logind[1290]: Session 22 logged out. Waiting for processes to exit. Nov 1 00:47:15.092954 systemd-logind[1290]: Removed session 22. Nov 1 00:47:16.277761 kubelet[2110]: E1101 00:47:16.277705 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-547545f98f-bqwf6" podUID="60293e01-1e82-445c-9d51-cf8544191dce" Nov 1 00:47:17.269934 kubelet[2110]: E1101 00:47:17.269884 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c858d548c-cmrkr" podUID="704976ec-fdca-4788-bd96-1a541f0cf01c" Nov 1 00:47:19.270059 kubelet[2110]: E1101 00:47:19.270013 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c858d548c-69qzg" podUID="ffdf82e5-9850-41df-9576-1cf8a00ef8fd" Nov 1 00:47:20.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.128:22-10.0.0.1:44138 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:47:20.091213 systemd[1]: Started sshd@22-10.0.0.128:22-10.0.0.1:44138.service. Nov 1 00:47:20.099368 kernel: kauditd_printk_skb: 57 callbacks suppressed Nov 1 00:47:20.099439 kernel: audit: type=1130 audit(1761958040.090:559): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.128:22-10.0.0.1:44138 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:47:20.138000 audit[5344]: USER_ACCT pid=5344 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:20.140056 sshd[5344]: Accepted publickey for core from 10.0.0.1 port 44138 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:47:20.142450 sshd[5344]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:47:20.140000 audit[5344]: CRED_ACQ pid=5344 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:20.146570 systemd-logind[1290]: New session 23 of user core. Nov 1 00:47:20.147709 systemd[1]: Started session-23.scope. Nov 1 00:47:20.152885 kernel: audit: type=1101 audit(1761958040.138:560): pid=5344 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:20.152949 kernel: audit: type=1103 audit(1761958040.140:561): pid=5344 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:20.152986 kernel: audit: type=1006 audit(1761958040.140:562): pid=5344 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Nov 1 00:47:20.140000 audit[5344]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe064a4b60 a2=3 a3=0 items=0 ppid=1 pid=5344 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:47:20.204047 kernel: audit: type=1300 audit(1761958040.140:562): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe064a4b60 a2=3 a3=0 items=0 ppid=1 pid=5344 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:47:20.204206 kernel: audit: type=1327 audit(1761958040.140:562): proctitle=737368643A20636F7265205B707269765D Nov 1 00:47:20.140000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:47:20.152000 audit[5344]: USER_START pid=5344 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:20.215230 kernel: audit: type=1105 audit(1761958040.152:563): pid=5344 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:20.215325 kernel: audit: type=1103 audit(1761958040.154:564): pid=5347 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:20.154000 audit[5347]: CRED_ACQ pid=5347 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:20.608054 sshd[5344]: pam_unix(sshd:session): session closed for user core Nov 1 00:47:20.607000 audit[5344]: 
USER_END pid=5344 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:20.610135 systemd[1]: sshd@22-10.0.0.128:22-10.0.0.1:44138.service: Deactivated successfully. Nov 1 00:47:20.611288 systemd[1]: session-23.scope: Deactivated successfully. Nov 1 00:47:20.611817 systemd-logind[1290]: Session 23 logged out. Waiting for processes to exit. Nov 1 00:47:20.612585 systemd-logind[1290]: Removed session 23. Nov 1 00:47:20.607000 audit[5344]: CRED_DISP pid=5344 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:20.627026 kernel: audit: type=1106 audit(1761958040.607:565): pid=5344 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:20.627111 kernel: audit: type=1104 audit(1761958040.607:566): pid=5344 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:20.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.128:22-10.0.0.1:44138 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:47:23.269692 kubelet[2110]: E1101 00:47:23.269646 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zk5w7" podUID="323323dc-c361-4116-a022-8e5f45430869" Nov 1 00:47:25.610868 systemd[1]: Started sshd@23-10.0.0.128:22-10.0.0.1:44150.service. Nov 1 00:47:25.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.128:22-10.0.0.1:44150 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:47:25.613149 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 00:47:25.613212 kernel: audit: type=1130 audit(1761958045.609:568): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.128:22-10.0.0.1:44150 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:47:25.641000 audit[5380]: USER_ACCT pid=5380 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:25.646570 sshd[5380]: Accepted publickey for core from 10.0.0.1 port 44150 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:47:25.650374 kernel: audit: type=1101 audit(1761958045.641:569): pid=5380 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:25.650441 kernel: audit: type=1103 audit(1761958045.648:570): pid=5380 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:25.648000 audit[5380]: CRED_ACQ pid=5380 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:25.650646 sshd[5380]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:47:25.653867 systemd-logind[1290]: New session 24 of user core. Nov 1 00:47:25.654596 systemd[1]: Started session-24.scope. 
Nov 1 00:47:25.668721 kernel: audit: type=1006 audit(1761958045.649:571): pid=5380 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Nov 1 00:47:25.668790 kernel: audit: type=1300 audit(1761958045.649:571): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc57b9f470 a2=3 a3=0 items=0 ppid=1 pid=5380 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:47:25.649000 audit[5380]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc57b9f470 a2=3 a3=0 items=0 ppid=1 pid=5380 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:47:25.675720 kernel: audit: type=1327 audit(1761958045.649:571): proctitle=737368643A20636F7265205B707269765D Nov 1 00:47:25.649000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:47:25.678029 kernel: audit: type=1105 audit(1761958045.658:572): pid=5380 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:25.658000 audit[5380]: USER_START pid=5380 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:25.685315 kernel: audit: type=1103 audit(1761958045.659:573): pid=5383 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Nov 1 00:47:25.659000 audit[5383]: CRED_ACQ pid=5383 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:25.765860 sshd[5380]: pam_unix(sshd:session): session closed for user core Nov 1 00:47:25.765000 audit[5380]: USER_END pid=5380 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:25.769278 systemd[1]: sshd@23-10.0.0.128:22-10.0.0.1:44150.service: Deactivated successfully. Nov 1 00:47:25.770035 systemd[1]: session-24.scope: Deactivated successfully. Nov 1 00:47:25.770928 systemd-logind[1290]: Session 24 logged out. Waiting for processes to exit. Nov 1 00:47:25.771597 systemd-logind[1290]: Removed session 24. 
Nov 1 00:47:25.765000 audit[5380]: CRED_DISP pid=5380 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:25.780149 kernel: audit: type=1106 audit(1761958045.765:574): pid=5380 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:25.780191 kernel: audit: type=1104 audit(1761958045.765:575): pid=5380 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:25.768000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.128:22-10.0.0.1:44150 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:47:27.270441 kubelet[2110]: E1101 00:47:27.270392 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bbnwx" podUID="5fbcbf90-e90d-4d2e-bb2c-68aa5206a338" Nov 1 00:47:28.067000 audit[5396]: NETFILTER_CFG table=filter:141 family=2 entries=26 op=nft_register_rule pid=5396 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:47:28.067000 audit[5396]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd23fd4450 a2=0 a3=7ffd23fd443c items=0 ppid=2264 pid=5396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:47:28.067000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:47:28.076000 audit[5396]: NETFILTER_CFG table=nat:142 family=2 entries=104 op=nft_register_chain pid=5396 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:47:28.076000 audit[5396]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffd23fd4450 a2=0 a3=7ffd23fd443c items=0 ppid=2264 pid=5396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:47:28.076000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:47:28.269969 kubelet[2110]: E1101 00:47:28.269917 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c858d548c-cmrkr" podUID="704976ec-fdca-4788-bd96-1a541f0cf01c" Nov 1 00:47:29.270061 kubelet[2110]: E1101 00:47:29.270002 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5dcb6947d5-ljzpr" podUID="6fe081ef-ff27-4230-8865-b572345e2224" Nov 1 00:47:30.768997 systemd[1]: Started sshd@24-10.0.0.128:22-10.0.0.1:54274.service. Nov 1 00:47:30.774567 kernel: kauditd_printk_skb: 7 callbacks suppressed Nov 1 00:47:30.774698 kernel: audit: type=1130 audit(1761958050.767:579): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.128:22-10.0.0.1:54274 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:47:30.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.128:22-10.0.0.1:54274 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:47:30.802000 audit[5397]: USER_ACCT pid=5397 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:30.803703 sshd[5397]: Accepted publickey for core from 10.0.0.1 port 54274 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:47:30.804866 sshd[5397]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:47:30.803000 audit[5397]: CRED_ACQ pid=5397 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:30.811992 systemd[1]: Started session-25.scope. Nov 1 00:47:30.812296 systemd-logind[1290]: New session 25 of user core. 
Nov 1 00:47:30.818237 kernel: audit: type=1101 audit(1761958050.802:580): pid=5397 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:30.818330 kernel: audit: type=1103 audit(1761958050.803:581): pid=5397 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:30.818374 kernel: audit: type=1006 audit(1761958050.803:582): pid=5397 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Nov 1 00:47:30.803000 audit[5397]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdaf0ced80 a2=3 a3=0 items=0 ppid=1 pid=5397 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:47:30.830278 kernel: audit: type=1300 audit(1761958050.803:582): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdaf0ced80 a2=3 a3=0 items=0 ppid=1 pid=5397 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:47:30.830328 kernel: audit: type=1327 audit(1761958050.803:582): proctitle=737368643A20636F7265205B707269765D Nov 1 00:47:30.803000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:47:30.816000 audit[5397]: USER_START pid=5397 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 
00:47:30.840994 kernel: audit: type=1105 audit(1761958050.816:583): pid=5397 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:30.817000 audit[5400]: CRED_ACQ pid=5400 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:30.847919 kernel: audit: type=1103 audit(1761958050.817:584): pid=5400 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:30.922861 sshd[5397]: pam_unix(sshd:session): session closed for user core Nov 1 00:47:30.922000 audit[5397]: USER_END pid=5397 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:30.925257 systemd[1]: sshd@24-10.0.0.128:22-10.0.0.1:54274.service: Deactivated successfully. Nov 1 00:47:30.926037 systemd[1]: session-25.scope: Deactivated successfully. Nov 1 00:47:30.929978 systemd-logind[1290]: Session 25 logged out. Waiting for processes to exit. Nov 1 00:47:30.930665 systemd-logind[1290]: Removed session 25. 
Nov 1 00:47:30.922000 audit[5397]: CRED_DISP pid=5397 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:30.939945 kernel: audit: type=1106 audit(1761958050.922:585): pid=5397 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:30.939991 kernel: audit: type=1104 audit(1761958050.922:586): pid=5397 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:30.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.128:22-10.0.0.1:54274 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:47:31.269460 kubelet[2110]: E1101 00:47:31.269395 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:47:31.270577 kubelet[2110]: E1101 00:47:31.270533 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-547545f98f-bqwf6" podUID="60293e01-1e82-445c-9d51-cf8544191dce" Nov 1 00:47:33.269922 kubelet[2110]: E1101 00:47:33.269878 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c858d548c-69qzg" podUID="ffdf82e5-9850-41df-9576-1cf8a00ef8fd" Nov 1 00:47:35.270580 kubelet[2110]: E1101 00:47:35.270525 2110 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zk5w7" podUID="323323dc-c361-4116-a022-8e5f45430869" Nov 1 00:47:35.926120 systemd[1]: Started sshd@25-10.0.0.128:22-10.0.0.1:54290.service. Nov 1 00:47:35.934399 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 00:47:35.934537 kernel: audit: type=1130 audit(1761958055.926:588): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.128:22-10.0.0.1:54290 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:47:35.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.128:22-10.0.0.1:54290 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:47:35.986000 audit[5412]: USER_ACCT pid=5412 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:35.991451 sshd[5412]: Accepted publickey for core from 10.0.0.1 port 54290 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo Nov 1 00:47:35.998000 audit[5412]: CRED_ACQ pid=5412 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:36.009045 kernel: audit: type=1101 audit(1761958055.986:589): pid=5412 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:36.009108 kernel: audit: type=1103 audit(1761958055.998:590): pid=5412 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Nov 1 00:47:36.009306 sshd[5412]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:47:36.013180 systemd-logind[1290]: New session 26 of user core. Nov 1 00:47:36.013540 systemd[1]: Started session-26.scope. 
Nov 1 00:47:36.013976 kernel: audit: type=1006 audit(1761958055.998:591): pid=5412 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1
Nov 1 00:47:35.998000 audit[5412]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd670dc8c0 a2=3 a3=0 items=0 ppid=1 pid=5412 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:47:36.021549 kernel: audit: type=1300 audit(1761958055.998:591): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd670dc8c0 a2=3 a3=0 items=0 ppid=1 pid=5412 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:47:35.998000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Nov 1 00:47:36.024389 kernel: audit: type=1327 audit(1761958055.998:591): proctitle=737368643A20636F7265205B707269765D
Nov 1 00:47:36.024428 kernel: audit: type=1105 audit(1761958056.017:592): pid=5412 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Nov 1 00:47:36.017000 audit[5412]: USER_START pid=5412 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Nov 1 00:47:36.018000 audit[5415]: CRED_ACQ pid=5415 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Nov 1 00:47:36.039274 kernel: audit: type=1103 audit(1761958056.018:593): pid=5415 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Nov 1 00:47:36.164303 sshd[5412]: pam_unix(sshd:session): session closed for user core
Nov 1 00:47:36.165000 audit[5412]: USER_END pid=5412 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Nov 1 00:47:36.167426 systemd[1]: sshd@25-10.0.0.128:22-10.0.0.1:54290.service: Deactivated successfully.
Nov 1 00:47:36.168651 systemd[1]: session-26.scope: Deactivated successfully.
Nov 1 00:47:36.169410 systemd-logind[1290]: Session 26 logged out. Waiting for processes to exit.
Nov 1 00:47:36.170739 systemd-logind[1290]: Removed session 26.
Nov 1 00:47:36.165000 audit[5412]: CRED_DISP pid=5412 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Nov 1 00:47:36.181112 kernel: audit: type=1106 audit(1761958056.165:594): pid=5412 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Nov 1 00:47:36.181212 kernel: audit: type=1104 audit(1761958056.165:595): pid=5412 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Nov 1 00:47:36.167000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.128:22-10.0.0.1:54290 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:47:39.269722 kubelet[2110]: E1101 00:47:39.269655 2110 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:47:39.270201 kubelet[2110]: E1101 00:47:39.270034 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7c858d548c-cmrkr" podUID="704976ec-fdca-4788-bd96-1a541f0cf01c"
Nov 1 00:47:41.175398 kernel: kauditd_printk_skb: 1 callbacks suppressed
Nov 1 00:47:41.175582 kernel: audit: type=1130 audit(1761958061.167:597): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.128:22-10.0.0.1:60690 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:47:41.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.128:22-10.0.0.1:60690 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:47:41.167760 systemd[1]: Started sshd@26-10.0.0.128:22-10.0.0.1:60690.service.
Nov 1 00:47:41.215000 audit[5428]: USER_ACCT pid=5428 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Nov 1 00:47:41.215953 sshd[5428]: Accepted publickey for core from 10.0.0.1 port 60690 ssh2: RSA SHA256:NQ/pL2fWYvQCjEeRqy6L6UmvNbztCIRYTBTHl6vxSTo
Nov 1 00:47:41.217096 sshd[5428]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:47:41.216000 audit[5428]: CRED_ACQ pid=5428 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Nov 1 00:47:41.227204 systemd-logind[1290]: New session 27 of user core.
Nov 1 00:47:41.227561 systemd[1]: Started session-27.scope.
Nov 1 00:47:41.230508 kernel: audit: type=1101 audit(1761958061.215:598): pid=5428 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Nov 1 00:47:41.230638 kernel: audit: type=1103 audit(1761958061.216:599): pid=5428 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Nov 1 00:47:41.230663 kernel: audit: type=1006 audit(1761958061.216:600): pid=5428 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1
Nov 1 00:47:41.216000 audit[5428]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcc76c3b50 a2=3 a3=0 items=0 ppid=1 pid=5428 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:47:41.242400 kernel: audit: type=1300 audit(1761958061.216:600): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcc76c3b50 a2=3 a3=0 items=0 ppid=1 pid=5428 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:47:41.242443 kernel: audit: type=1327 audit(1761958061.216:600): proctitle=737368643A20636F7265205B707269765D
Nov 1 00:47:41.216000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Nov 1 00:47:41.232000 audit[5428]: USER_START pid=5428 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Nov 1 00:47:41.253221 kernel: audit: type=1105 audit(1761958061.232:601): pid=5428 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Nov 1 00:47:41.253267 kernel: audit: type=1103 audit(1761958061.233:602): pid=5431 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Nov 1 00:47:41.233000 audit[5431]: CRED_ACQ pid=5431 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Nov 1 00:47:41.270167 kubelet[2110]: E1101 00:47:41.270123 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-bbnwx" podUID="5fbcbf90-e90d-4d2e-bb2c-68aa5206a338"
Nov 1 00:47:41.416698 sshd[5428]: pam_unix(sshd:session): session closed for user core
Nov 1 00:47:41.417000 audit[5428]: USER_END pid=5428 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Nov 1 00:47:41.426281 systemd[1]: sshd@26-10.0.0.128:22-10.0.0.1:60690.service: Deactivated successfully.
Nov 1 00:47:41.427636 systemd[1]: session-27.scope: Deactivated successfully.
Nov 1 00:47:41.428107 systemd-logind[1290]: Session 27 logged out. Waiting for processes to exit.
Nov 1 00:47:41.429085 systemd-logind[1290]: Removed session 27.
Nov 1 00:47:41.417000 audit[5428]: CRED_DISP pid=5428 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Nov 1 00:47:41.436722 kernel: audit: type=1106 audit(1761958061.417:603): pid=5428 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Nov 1 00:47:41.436877 kernel: audit: type=1104 audit(1761958061.417:604): pid=5428 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Nov 1 00:47:41.426000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.128:22-10.0.0.1:60690 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:47:42.272128 kubelet[2110]: E1101 00:47:42.272070 2110 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-547545f98f-bqwf6" podUID="60293e01-1e82-445c-9d51-cf8544191dce"