Oct 31 01:13:32.074308 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Oct 30 23:32:41 -00 2025
Oct 31 01:13:32.074333 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7605c743a37b990723033788c91d5dcda748347858877b1088098370c2a7e4d3
Oct 31 01:13:32.074342 kernel: BIOS-provided physical RAM map:
Oct 31 01:13:32.074347 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct 31 01:13:32.074353 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct 31 01:13:32.074358 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct 31 01:13:32.074365 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Oct 31 01:13:32.074371 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Oct 31 01:13:32.074378 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Oct 31 01:13:32.074383 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Oct 31 01:13:32.074389 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 31 01:13:32.074395 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct 31 01:13:32.074400 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Oct 31 01:13:32.074406 kernel: NX (Execute Disable) protection: active
Oct 31 01:13:32.074414 kernel: SMBIOS 2.8 present.
Oct 31 01:13:32.074421 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Oct 31 01:13:32.074427 kernel: Hypervisor detected: KVM
Oct 31 01:13:32.074433 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 31 01:13:32.074441 kernel: kvm-clock: cpu 0, msr 5c1a0001, primary cpu clock
Oct 31 01:13:32.074447 kernel: kvm-clock: using sched offset of 3467878841 cycles
Oct 31 01:13:32.074454 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 31 01:13:32.074460 kernel: tsc: Detected 2794.748 MHz processor
Oct 31 01:13:32.074467 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 31 01:13:32.074475 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 31 01:13:32.074481 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Oct 31 01:13:32.074487 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Oct 31 01:13:32.074493 kernel: Using GB pages for direct mapping
Oct 31 01:13:32.074499 kernel: ACPI: Early table checksum verification disabled
Oct 31 01:13:32.074506 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Oct 31 01:13:32.074512 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 31 01:13:32.074518 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 31 01:13:32.074524 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 31 01:13:32.074532 kernel: ACPI: FACS 0x000000009CFE0000 000040
Oct 31 01:13:32.074538 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 31 01:13:32.074544 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 31 01:13:32.074550 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 31 01:13:32.074557 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 31 01:13:32.074563 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Oct 31 01:13:32.074569 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Oct 31 01:13:32.074575 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Oct 31 01:13:32.074585 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Oct 31 01:13:32.074592 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Oct 31 01:13:32.074598 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Oct 31 01:13:32.074623 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Oct 31 01:13:32.074630 kernel: No NUMA configuration found
Oct 31 01:13:32.074636 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Oct 31 01:13:32.074644 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Oct 31 01:13:32.074651 kernel: Zone ranges:
Oct 31 01:13:32.074658 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Oct 31 01:13:32.074664 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Oct 31 01:13:32.074671 kernel: Normal empty
Oct 31 01:13:32.074677 kernel: Movable zone start for each node
Oct 31 01:13:32.074684 kernel: Early memory node ranges
Oct 31 01:13:32.074691 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Oct 31 01:13:32.074697 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Oct 31 01:13:32.074705 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Oct 31 01:13:32.074714 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 31 01:13:32.074720 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 31 01:13:32.074727 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Oct 31 01:13:32.074734 kernel: ACPI: PM-Timer IO Port: 0x608
Oct 31 01:13:32.074741 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 31 01:13:32.074747 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 31 01:13:32.074754 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 31 01:13:32.074760 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 31 01:13:32.074767 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 31 01:13:32.074777 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 31 01:13:32.074784 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 31 01:13:32.074790 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 31 01:13:32.074797 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Oct 31 01:13:32.074804 kernel: TSC deadline timer available
Oct 31 01:13:32.074811 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Oct 31 01:13:32.074817 kernel: kvm-guest: KVM setup pv remote TLB flush
Oct 31 01:13:32.074824 kernel: kvm-guest: setup PV sched yield
Oct 31 01:13:32.074830 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Oct 31 01:13:32.074838 kernel: Booting paravirtualized kernel on KVM
Oct 31 01:13:32.074845 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 31 01:13:32.074852 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
Oct 31 01:13:32.074858 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288
Oct 31 01:13:32.074865 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152
Oct 31 01:13:32.074871 kernel: pcpu-alloc: [0] 0 1 2 3
Oct 31 01:13:32.074878 kernel: kvm-guest: setup async PF for cpu 0
Oct 31 01:13:32.074884 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0
Oct 31 01:13:32.074891 kernel: kvm-guest: PV spinlocks enabled
Oct 31 01:13:32.074899 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Oct 31 01:13:32.074906 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Oct 31 01:13:32.074913 kernel: Policy zone: DMA32
Oct 31 01:13:32.074920 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7605c743a37b990723033788c91d5dcda748347858877b1088098370c2a7e4d3
Oct 31 01:13:32.074927 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 31 01:13:32.074934 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 31 01:13:32.074941 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 31 01:13:32.074947 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 31 01:13:32.074956 kernel: Memory: 2436696K/2571752K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47496K init, 4084K bss, 134796K reserved, 0K cma-reserved)
Oct 31 01:13:32.074963 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Oct 31 01:13:32.074969 kernel: ftrace: allocating 34614 entries in 136 pages
Oct 31 01:13:32.074976 kernel: ftrace: allocated 136 pages with 2 groups
Oct 31 01:13:32.074982 kernel: rcu: Hierarchical RCU implementation.
Oct 31 01:13:32.074989 kernel: rcu: RCU event tracing is enabled.
Oct 31 01:13:32.074996 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Oct 31 01:13:32.075003 kernel: Rude variant of Tasks RCU enabled.
Oct 31 01:13:32.075010 kernel: Tracing variant of Tasks RCU enabled.
Oct 31 01:13:32.075018 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 31 01:13:32.075024 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Oct 31 01:13:32.075031 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Oct 31 01:13:32.075037 kernel: random: crng init done
Oct 31 01:13:32.075044 kernel: Console: colour VGA+ 80x25
Oct 31 01:13:32.075050 kernel: printk: console [ttyS0] enabled
Oct 31 01:13:32.075057 kernel: ACPI: Core revision 20210730
Oct 31 01:13:32.075064 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Oct 31 01:13:32.075070 kernel: APIC: Switch to symmetric I/O mode setup
Oct 31 01:13:32.075078 kernel: x2apic enabled
Oct 31 01:13:32.075085 kernel: Switched APIC routing to physical x2apic.
Oct 31 01:13:32.075094 kernel: kvm-guest: setup PV IPIs
Oct 31 01:13:32.075100 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Oct 31 01:13:32.075107 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Oct 31 01:13:32.075116 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Oct 31 01:13:32.075123 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 31 01:13:32.075129 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct 31 01:13:32.075136 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct 31 01:13:32.075149 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 31 01:13:32.075156 kernel: Spectre V2 : Mitigation: Retpolines
Oct 31 01:13:32.075172 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct 31 01:13:32.075180 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct 31 01:13:32.075186 kernel: active return thunk: retbleed_return_thunk
Oct 31 01:13:32.075193 kernel: RETBleed: Mitigation: untrained return thunk
Oct 31 01:13:32.075200 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 31 01:13:32.075207 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Oct 31 01:13:32.075215 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 31 01:13:32.075223 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 31 01:13:32.075230 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 31 01:13:32.075237 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Oct 31 01:13:32.075244 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Oct 31 01:13:32.075251 kernel: Freeing SMP alternatives memory: 32K
Oct 31 01:13:32.075258 kernel: pid_max: default: 32768 minimum: 301
Oct 31 01:13:32.075264 kernel: LSM: Security Framework initializing
Oct 31 01:13:32.075272 kernel: SELinux: Initializing.
Oct 31 01:13:32.075279 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 31 01:13:32.075286 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 31 01:13:32.075293 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct 31 01:13:32.075300 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct 31 01:13:32.075307 kernel: ... version: 0
Oct 31 01:13:32.075314 kernel: ... bit width: 48
Oct 31 01:13:32.075321 kernel: ... generic registers: 6
Oct 31 01:13:32.075328 kernel: ... value mask: 0000ffffffffffff
Oct 31 01:13:32.075337 kernel: ... max period: 00007fffffffffff
Oct 31 01:13:32.075343 kernel: ... fixed-purpose events: 0
Oct 31 01:13:32.075350 kernel: ... event mask: 000000000000003f
Oct 31 01:13:32.075357 kernel: signal: max sigframe size: 1776
Oct 31 01:13:32.075364 kernel: rcu: Hierarchical SRCU implementation.
Oct 31 01:13:32.075371 kernel: smp: Bringing up secondary CPUs ...
Oct 31 01:13:32.075378 kernel: x86: Booting SMP configuration:
Oct 31 01:13:32.075385 kernel: .... node #0, CPUs: #1
Oct 31 01:13:32.075391 kernel: kvm-clock: cpu 1, msr 5c1a0041, secondary cpu clock
Oct 31 01:13:32.075400 kernel: kvm-guest: setup async PF for cpu 1
Oct 31 01:13:32.075407 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0
Oct 31 01:13:32.075413 kernel: #2
Oct 31 01:13:32.075420 kernel: kvm-clock: cpu 2, msr 5c1a0081, secondary cpu clock
Oct 31 01:13:32.075427 kernel: kvm-guest: setup async PF for cpu 2
Oct 31 01:13:32.075434 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0
Oct 31 01:13:32.075443 kernel: #3
Oct 31 01:13:32.075450 kernel: kvm-clock: cpu 3, msr 5c1a00c1, secondary cpu clock
Oct 31 01:13:32.075457 kernel: kvm-guest: setup async PF for cpu 3
Oct 31 01:13:32.075464 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0
Oct 31 01:13:32.075472 kernel: smp: Brought up 1 node, 4 CPUs
Oct 31 01:13:32.075479 kernel: smpboot: Max logical packages: 1
Oct 31 01:13:32.075486 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Oct 31 01:13:32.075493 kernel: devtmpfs: initialized
Oct 31 01:13:32.075500 kernel: x86/mm: Memory block size: 128MB
Oct 31 01:13:32.075507 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 31 01:13:32.075514 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Oct 31 01:13:32.075521 kernel: pinctrl core: initialized pinctrl subsystem
Oct 31 01:13:32.075528 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 31 01:13:32.075536 kernel: audit: initializing netlink subsys (disabled)
Oct 31 01:13:32.075543 kernel: audit: type=2000 audit(1761873210.781:1): state=initialized audit_enabled=0 res=1
Oct 31 01:13:32.075550 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 31 01:13:32.075556 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 31 01:13:32.075563 kernel: cpuidle: using governor menu
Oct 31 01:13:32.075570 kernel: ACPI: bus type PCI registered
Oct 31 01:13:32.075577 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 31 01:13:32.075584 kernel: dca service started, version 1.12.1
Oct 31 01:13:32.075593 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Oct 31 01:13:32.075601 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820
Oct 31 01:13:32.075618 kernel: PCI: Using configuration type 1 for base access
Oct 31 01:13:32.075626 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 31 01:13:32.075633 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Oct 31 01:13:32.075640 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Oct 31 01:13:32.075647 kernel: ACPI: Added _OSI(Module Device)
Oct 31 01:13:32.075653 kernel: ACPI: Added _OSI(Processor Device)
Oct 31 01:13:32.075660 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 31 01:13:32.075667 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Oct 31 01:13:32.075675 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Oct 31 01:13:32.075682 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Oct 31 01:13:32.075689 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 31 01:13:32.075696 kernel: ACPI: Interpreter enabled
Oct 31 01:13:32.075703 kernel: ACPI: PM: (supports S0 S3 S5)
Oct 31 01:13:32.075710 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 31 01:13:32.075717 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 31 01:13:32.075724 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Oct 31 01:13:32.075731 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 31 01:13:32.075852 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 31 01:13:32.075931 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Oct 31 01:13:32.076004 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Oct 31 01:13:32.076014 kernel: PCI host bridge to bus 0000:00
Oct 31 01:13:32.076104 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Oct 31 01:13:32.076185 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Oct 31 01:13:32.076256 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 31 01:13:32.076323 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Oct 31 01:13:32.076390 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct 31 01:13:32.076455 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Oct 31 01:13:32.076521 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 31 01:13:32.076618 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Oct 31 01:13:32.076704 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Oct 31 01:13:32.076783 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Oct 31 01:13:32.076857 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Oct 31 01:13:32.076935 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Oct 31 01:13:32.077008 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 31 01:13:32.077089 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Oct 31 01:13:32.077170 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Oct 31 01:13:32.077251 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Oct 31 01:13:32.077328 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Oct 31 01:13:32.077408 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Oct 31 01:13:32.077483 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Oct 31 01:13:32.077555 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Oct 31 01:13:32.077641 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Oct 31 01:13:32.077723 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Oct 31 01:13:32.077800 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Oct 31 01:13:32.077877 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Oct 31 01:13:32.077951 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Oct 31 01:13:32.078024 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Oct 31 01:13:32.078104 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Oct 31 01:13:32.078188 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Oct 31 01:13:32.078283 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Oct 31 01:13:32.078362 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Oct 31 01:13:32.078435 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Oct 31 01:13:32.078516 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Oct 31 01:13:32.078588 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Oct 31 01:13:32.078598 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 31 01:13:32.078616 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 31 01:13:32.078623 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 31 01:13:32.078633 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 31 01:13:32.078643 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Oct 31 01:13:32.078650 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Oct 31 01:13:32.078657 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Oct 31 01:13:32.078664 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Oct 31 01:13:32.078671 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Oct 31 01:13:32.078678 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Oct 31 01:13:32.078685 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Oct 31 01:13:32.078691 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Oct 31 01:13:32.078698 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Oct 31 01:13:32.078707 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Oct 31 01:13:32.078714 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Oct 31 01:13:32.078720 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Oct 31 01:13:32.078727 kernel: iommu: Default domain type: Translated
Oct 31 01:13:32.078734 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 31 01:13:32.078812 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Oct 31 01:13:32.078887 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 31 01:13:32.078965 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Oct 31 01:13:32.078984 kernel: vgaarb: loaded
Oct 31 01:13:32.078991 kernel: pps_core: LinuxPPS API ver. 1 registered
Oct 31 01:13:32.078998 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Oct 31 01:13:32.079005 kernel: PTP clock support registered
Oct 31 01:13:32.079012 kernel: PCI: Using ACPI for IRQ routing
Oct 31 01:13:32.079019 kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 31 01:13:32.079026 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Oct 31 01:13:32.079033 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Oct 31 01:13:32.079041 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Oct 31 01:13:32.079048 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Oct 31 01:13:32.079055 kernel: clocksource: Switched to clocksource kvm-clock
Oct 31 01:13:32.079062 kernel: VFS: Disk quotas dquot_6.6.0
Oct 31 01:13:32.079069 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 31 01:13:32.079150 kernel: pnp: PnP ACPI init
Oct 31 01:13:32.079168 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Oct 31 01:13:32.079176 kernel: pnp: PnP ACPI: found 6 devices
Oct 31 01:13:32.079186 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 31 01:13:32.079193 kernel: NET: Registered PF_INET protocol family
Oct 31 01:13:32.079200 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 31 01:13:32.079207 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 31 01:13:32.079214 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 31 01:13:32.079221 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 31 01:13:32.079228 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Oct 31 01:13:32.079235 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 31 01:13:32.079242 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 31 01:13:32.079250 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 31 01:13:32.079257 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 31 01:13:32.082557 kernel: NET: Registered PF_XDP protocol family
Oct 31 01:13:32.082678 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Oct 31 01:13:32.082745 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Oct 31 01:13:32.082808 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 31 01:13:32.082871 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Oct 31 01:13:32.082934 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Oct 31 01:13:32.082948 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Oct 31 01:13:32.082955 kernel: PCI: CLS 0 bytes, default 64
Oct 31 01:13:32.082962 kernel: Initialise system trusted keyrings
Oct 31 01:13:32.082969 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Oct 31 01:13:32.082976 kernel: Key type asymmetric registered
Oct 31 01:13:32.082984 kernel: Asymmetric key parser 'x509' registered
Oct 31 01:13:32.082991 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Oct 31 01:13:32.082998 kernel: io scheduler mq-deadline registered
Oct 31 01:13:32.083005 kernel: io scheduler kyber registered
Oct 31 01:13:32.083013 kernel: io scheduler bfq registered
Oct 31 01:13:32.083021 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Oct 31 01:13:32.083028 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Oct 31 01:13:32.083035 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Oct 31 01:13:32.083042 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Oct 31 01:13:32.083049 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 31 01:13:32.083056 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 31 01:13:32.083063 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 31 01:13:32.083070 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 31 01:13:32.083078 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 31 01:13:32.083157 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Oct 31 01:13:32.083248 kernel: rtc_cmos 00:04: RTC can wake from S4
Oct 31 01:13:32.083316 kernel: rtc_cmos 00:04: registered as rtc0
Oct 31 01:13:32.083384 kernel: rtc_cmos 00:04: setting system clock to 2025-10-31T01:13:31 UTC (1761873211)
Oct 31 01:13:32.083393 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Oct 31 01:13:32.083400 kernel: NET: Registered PF_INET6 protocol family
Oct 31 01:13:32.083407 kernel: Segment Routing with IPv6
Oct 31 01:13:32.083417 kernel: In-situ OAM (IOAM) with IPv6
Oct 31 01:13:32.083424 kernel: NET: Registered PF_PACKET protocol family
Oct 31 01:13:32.083431 kernel: Key type dns_resolver registered
Oct 31 01:13:32.083438 kernel: IPI shorthand broadcast: enabled
Oct 31 01:13:32.083445 kernel: sched_clock: Marking stable (674218674, 184692687)->(901143546, -42232185)
Oct 31 01:13:32.083452 kernel: registered taskstats version 1
Oct 31 01:13:32.083460 kernel: Loading compiled-in X.509 certificates
Oct 31 01:13:32.083467 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: 8306d4e745b00e76b5fae2596c709096b7f28adc'
Oct 31 01:13:32.083473 kernel: Key type .fscrypt registered
Oct 31 01:13:32.083482 kernel: Key type fscrypt-provisioning registered
Oct 31 01:13:32.083489 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 31 01:13:32.083496 kernel: ima: Allocated hash algorithm: sha1
Oct 31 01:13:32.083503 kernel: ima: No architecture policies found
Oct 31 01:13:32.083510 kernel: clk: Disabling unused clocks
Oct 31 01:13:32.083517 kernel: Freeing unused kernel image (initmem) memory: 47496K
Oct 31 01:13:32.083524 kernel: Write protecting the kernel read-only data: 28672k
Oct 31 01:13:32.083531 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Oct 31 01:13:32.083538 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K
Oct 31 01:13:32.083547 kernel: Run /init as init process
Oct 31 01:13:32.083554 kernel: with arguments:
Oct 31 01:13:32.083560 kernel: /init
Oct 31 01:13:32.083567 kernel: with environment:
Oct 31 01:13:32.083574 kernel: HOME=/
Oct 31 01:13:32.083581 kernel: TERM=linux
Oct 31 01:13:32.083595 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Oct 31 01:13:32.083616 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 31 01:13:32.083626 systemd[1]: Detected virtualization kvm.
Oct 31 01:13:32.083633 systemd[1]: Detected architecture x86-64.
Oct 31 01:13:32.083641 systemd[1]: Running in initrd.
Oct 31 01:13:32.083648 systemd[1]: No hostname configured, using default hostname.
Oct 31 01:13:32.083648 systemd[1]: Hostname set to .
Oct 31 01:13:32.083656 systemd[1]: Initializing machine ID from VM UUID.
Oct 31 01:13:32.083663 systemd[1]: Queued start job for default target initrd.target.
Oct 31 01:13:32.083671 systemd[1]: Started systemd-ask-password-console.path.
Oct 31 01:13:32.083678 systemd[1]: Reached target cryptsetup.target.
Oct 31 01:13:32.083688 systemd[1]: Reached target paths.target.
Oct 31 01:13:32.083695 systemd[1]: Reached target slices.target.
Oct 31 01:13:32.083710 systemd[1]: Reached target swap.target.
Oct 31 01:13:32.083718 systemd[1]: Reached target timers.target.
Oct 31 01:13:32.083727 systemd[1]: Listening on iscsid.socket.
Oct 31 01:13:32.083735 systemd[1]: Listening on iscsiuio.socket.
Oct 31 01:13:32.083743 systemd[1]: Listening on systemd-journald-audit.socket.
Oct 31 01:13:32.083751 systemd[1]: Listening on systemd-journald-dev-log.socket.
Oct 31 01:13:32.083759 systemd[1]: Listening on systemd-journald.socket.
Oct 31 01:13:32.083766 systemd[1]: Listening on systemd-networkd.socket.
Oct 31 01:13:32.083774 systemd[1]: Listening on systemd-udevd-control.socket.
Oct 31 01:13:32.083782 systemd[1]: Listening on systemd-udevd-kernel.socket.
Oct 31 01:13:32.083790 systemd[1]: Reached target sockets.target.
Oct 31 01:13:32.083797 systemd[1]: Starting kmod-static-nodes.service...
Oct 31 01:13:32.083806 systemd[1]: Finished network-cleanup.service.
Oct 31 01:13:32.083814 systemd[1]: Starting systemd-fsck-usr.service...
Oct 31 01:13:32.083822 systemd[1]: Starting systemd-journald.service...
Oct 31 01:13:32.083830 systemd[1]: Starting systemd-modules-load.service...
Oct 31 01:13:32.083838 systemd[1]: Starting systemd-resolved.service...
Oct 31 01:13:32.083845 systemd[1]: Starting systemd-vconsole-setup.service...
Oct 31 01:13:32.083853 systemd[1]: Finished kmod-static-nodes.service.
Oct 31 01:13:32.083861 kernel: audit: type=1130 audit(1761873212.073:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:32.083869 systemd[1]: Finished systemd-fsck-usr.service.
Oct 31 01:13:32.083878 kernel: audit: type=1130 audit(1761873212.081:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:32.083891 systemd-journald[198]: Journal started
Oct 31 01:13:32.083930 systemd-journald[198]: Runtime Journal (/run/log/journal/85770368ad6d438f96217ddea72334b2) is 6.0M, max 48.5M, 42.5M free.
Oct 31 01:13:32.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:32.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:32.081009 systemd-modules-load[199]: Inserted module 'overlay'
Oct 31 01:13:32.090232 systemd[1]: Started systemd-journald.service.
Oct 31 01:13:32.100254 systemd-resolved[200]: Positive Trust Anchors:
Oct 31 01:13:32.174575 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 31 01:13:32.174624 kernel: Bridge firewalling registered
Oct 31 01:13:32.174635 kernel: SCSI subsystem initialized
Oct 31 01:13:32.174644 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 31 01:13:32.174653 kernel: device-mapper: uevent: version 1.0.3 Oct 31 01:13:32.174662 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Oct 31 01:13:32.174671 kernel: audit: type=1130 audit(1761873212.166:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:13:32.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:13:32.100276 systemd-resolved[200]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 31 01:13:32.183046 kernel: audit: type=1130 audit(1761873212.175:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:13:32.175000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:13:32.100306 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 31 01:13:32.189632 kernel: audit: type=1130 audit(1761873212.183:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:13:32.183000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:13:32.102471 systemd-resolved[200]: Defaulting to hostname 'linux'. Oct 31 01:13:32.205552 kernel: audit: type=1130 audit(1761873212.189:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:13:32.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:13:32.120309 systemd-modules-load[199]: Inserted module 'br_netfilter' Oct 31 01:13:32.147905 systemd-modules-load[199]: Inserted module 'dm_multipath' Oct 31 01:13:32.167543 systemd[1]: Started systemd-resolved.service. Oct 31 01:13:32.176042 systemd[1]: Finished systemd-modules-load.service. Oct 31 01:13:32.184525 systemd[1]: Finished systemd-vconsole-setup.service. Oct 31 01:13:32.190285 systemd[1]: Reached target nss-lookup.target. Oct 31 01:13:32.207469 systemd[1]: Starting dracut-cmdline-ask.service... Oct 31 01:13:32.208899 systemd[1]: Starting systemd-sysctl.service... Oct 31 01:13:32.209417 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 31 01:13:32.219689 systemd[1]: Finished dracut-cmdline-ask.service. Oct 31 01:13:32.227752 kernel: audit: type=1130 audit(1761873212.220:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:13:32.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:13:32.221971 systemd[1]: Starting dracut-cmdline.service... Oct 31 01:13:32.231207 dracut-cmdline[220]: dracut-dracut-053 Oct 31 01:13:32.238744 kernel: audit: type=1130 audit(1761873212.232:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:13:32.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:13:32.230139 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 31 01:13:32.240379 dracut-cmdline[220]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7605c743a37b990723033788c91d5dcda748347858877b1088098370c2a7e4d3 Oct 31 01:13:32.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:13:32.245223 systemd[1]: Finished systemd-sysctl.service. Oct 31 01:13:32.255829 kernel: audit: type=1130 audit(1761873212.249:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:13:32.292635 kernel: Loading iSCSI transport class v2.0-870. Oct 31 01:13:32.308638 kernel: iscsi: registered transport (tcp) Oct 31 01:13:32.329642 kernel: iscsi: registered transport (qla4xxx) Oct 31 01:13:32.329686 kernel: QLogic iSCSI HBA Driver Oct 31 01:13:32.353875 systemd[1]: Finished dracut-cmdline.service. 
Oct 31 01:13:32.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:13:32.356664 systemd[1]: Starting dracut-pre-udev.service... Oct 31 01:13:32.401633 kernel: raid6: avx2x4 gen() 30672 MB/s Oct 31 01:13:32.419625 kernel: raid6: avx2x4 xor() 8156 MB/s Oct 31 01:13:32.437624 kernel: raid6: avx2x2 gen() 32519 MB/s Oct 31 01:13:32.455624 kernel: raid6: avx2x2 xor() 19257 MB/s Oct 31 01:13:32.473623 kernel: raid6: avx2x1 gen() 26645 MB/s Oct 31 01:13:32.491624 kernel: raid6: avx2x1 xor() 15372 MB/s Oct 31 01:13:32.509623 kernel: raid6: sse2x4 gen() 14852 MB/s Oct 31 01:13:32.527625 kernel: raid6: sse2x4 xor() 7509 MB/s Oct 31 01:13:32.545623 kernel: raid6: sse2x2 gen() 16407 MB/s Oct 31 01:13:32.563624 kernel: raid6: sse2x2 xor() 9855 MB/s Oct 31 01:13:32.581624 kernel: raid6: sse2x1 gen() 11951 MB/s Oct 31 01:13:32.599980 kernel: raid6: sse2x1 xor() 7786 MB/s Oct 31 01:13:32.599993 kernel: raid6: using algorithm avx2x2 gen() 32519 MB/s Oct 31 01:13:32.600002 kernel: raid6: .... xor() 19257 MB/s, rmw enabled Oct 31 01:13:32.602386 kernel: raid6: using avx2x2 recovery algorithm Oct 31 01:13:32.614629 kernel: xor: automatically using best checksumming function avx Oct 31 01:13:32.703638 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Oct 31 01:13:32.710727 systemd[1]: Finished dracut-pre-udev.service. Oct 31 01:13:32.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:13:32.712000 audit: BPF prog-id=7 op=LOAD Oct 31 01:13:32.712000 audit: BPF prog-id=8 op=LOAD Oct 31 01:13:32.713995 systemd[1]: Starting systemd-udevd.service... Oct 31 01:13:32.726584 systemd-udevd[400]: Using default interface naming scheme 'v252'. 
Oct 31 01:13:32.730647 systemd[1]: Started systemd-udevd.service. Oct 31 01:13:32.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:13:32.733266 systemd[1]: Starting dracut-pre-trigger.service... Oct 31 01:13:32.741748 dracut-pre-trigger[412]: rd.md=0: removing MD RAID activation Oct 31 01:13:32.762638 systemd[1]: Finished dracut-pre-trigger.service. Oct 31 01:13:32.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:13:32.765219 systemd[1]: Starting systemd-udev-trigger.service... Oct 31 01:13:32.803226 systemd[1]: Finished systemd-udev-trigger.service. Oct 31 01:13:32.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:13:32.836766 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Oct 31 01:13:32.846870 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 31 01:13:32.846883 kernel: GPT:9289727 != 19775487 Oct 31 01:13:32.846892 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 31 01:13:32.846901 kernel: GPT:9289727 != 19775487 Oct 31 01:13:32.846909 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 31 01:13:32.846918 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 31 01:13:32.848628 kernel: cryptd: max_cpu_qlen set to 1000 Oct 31 01:13:32.850627 kernel: libata version 3.00 loaded. Oct 31 01:13:32.859450 kernel: ahci 0000:00:1f.2: version 3.0 Oct 31 01:13:32.887354 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Oct 31 01:13:32.887371 kernel: AVX2 version of gcm_enc/dec engaged. 
Oct 31 01:13:32.887381 kernel: AES CTR mode by8 optimization enabled Oct 31 01:13:32.887390 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Oct 31 01:13:32.887490 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Oct 31 01:13:32.887573 kernel: scsi host0: ahci Oct 31 01:13:32.887704 kernel: scsi host1: ahci Oct 31 01:13:32.887793 kernel: scsi host2: ahci Oct 31 01:13:32.887896 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (446) Oct 31 01:13:32.887906 kernel: scsi host3: ahci Oct 31 01:13:32.887993 kernel: scsi host4: ahci Oct 31 01:13:32.888081 kernel: scsi host5: ahci Oct 31 01:13:32.888180 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Oct 31 01:13:32.888190 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Oct 31 01:13:32.888198 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Oct 31 01:13:32.888207 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Oct 31 01:13:32.888216 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Oct 31 01:13:32.888225 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Oct 31 01:13:32.875595 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Oct 31 01:13:32.957250 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Oct 31 01:13:32.969262 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Oct 31 01:13:32.976264 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Oct 31 01:13:32.980993 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 31 01:13:32.983223 systemd[1]: Starting disk-uuid.service... Oct 31 01:13:32.990897 disk-uuid[528]: Primary Header is updated. Oct 31 01:13:32.990897 disk-uuid[528]: Secondary Entries is updated. Oct 31 01:13:32.990897 disk-uuid[528]: Secondary Header is updated. 
Oct 31 01:13:32.997631 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 31 01:13:33.001630 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 31 01:13:33.193659 kernel: ata2: SATA link down (SStatus 0 SControl 300) Oct 31 01:13:33.193750 kernel: ata1: SATA link down (SStatus 0 SControl 300) Oct 31 01:13:33.201630 kernel: ata6: SATA link down (SStatus 0 SControl 300) Oct 31 01:13:33.201656 kernel: ata4: SATA link down (SStatus 0 SControl 300) Oct 31 01:13:33.203652 kernel: ata5: SATA link down (SStatus 0 SControl 300) Oct 31 01:13:33.207928 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Oct 31 01:13:33.207955 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Oct 31 01:13:33.207978 kernel: ata3.00: applying bridge limits Oct 31 01:13:33.210227 kernel: ata3.00: configured for UDMA/100 Oct 31 01:13:33.210642 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Oct 31 01:13:33.245853 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Oct 31 01:13:33.263202 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 31 01:13:33.263232 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Oct 31 01:13:33.997639 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 31 01:13:33.997973 disk-uuid[529]: The operation has completed successfully. Oct 31 01:13:34.021113 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 31 01:13:34.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:13:34.022000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:13:34.021191 systemd[1]: Finished disk-uuid.service. Oct 31 01:13:34.026061 systemd[1]: Starting verity-setup.service... 
Oct 31 01:13:34.038646 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Oct 31 01:13:34.054638 systemd[1]: Found device dev-mapper-usr.device. Oct 31 01:13:34.058846 systemd[1]: Mounting sysusr-usr.mount... Oct 31 01:13:34.061690 systemd[1]: Finished verity-setup.service. Oct 31 01:13:34.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:13:34.119629 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Oct 31 01:13:34.119841 systemd[1]: Mounted sysusr-usr.mount. Oct 31 01:13:34.120072 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Oct 31 01:13:34.120860 systemd[1]: Starting ignition-setup.service... Oct 31 01:13:34.122944 systemd[1]: Starting parse-ip-for-networkd.service... Oct 31 01:13:34.136435 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 31 01:13:34.136461 kernel: BTRFS info (device vda6): using free space tree Oct 31 01:13:34.136473 kernel: BTRFS info (device vda6): has skinny extents Oct 31 01:13:34.143829 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 31 01:13:34.151080 systemd[1]: Finished ignition-setup.service. Oct 31 01:13:34.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:13:34.154818 systemd[1]: Starting ignition-fetch-offline.service... 
Oct 31 01:13:34.193830 ignition[652]: Ignition 2.14.0 Oct 31 01:13:34.193843 ignition[652]: Stage: fetch-offline Oct 31 01:13:34.193928 ignition[652]: no configs at "/usr/lib/ignition/base.d" Oct 31 01:13:34.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:13:34.200000 audit: BPF prog-id=9 op=LOAD Oct 31 01:13:34.196275 systemd[1]: Finished parse-ip-for-networkd.service. Oct 31 01:13:34.193937 ignition[652]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 31 01:13:34.201450 systemd[1]: Starting systemd-networkd.service... Oct 31 01:13:34.194041 ignition[652]: parsed url from cmdline: "" Oct 31 01:13:34.194044 ignition[652]: no config URL provided Oct 31 01:13:34.194049 ignition[652]: reading system config file "/usr/lib/ignition/user.ign" Oct 31 01:13:34.194055 ignition[652]: no config at "/usr/lib/ignition/user.ign" Oct 31 01:13:34.194072 ignition[652]: op(1): [started] loading QEMU firmware config module Oct 31 01:13:34.194077 ignition[652]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 31 01:13:34.196986 ignition[652]: op(1): [finished] loading QEMU firmware config module Oct 31 01:13:34.289263 ignition[652]: parsing config with SHA512: 333faaf37494e4ec5c885d2c24541017d656917ad95230949fc50dc0836d3c150d598f8530626bb16f0cea6e00fdc99e44dba33fba84f2fa8c248a9ea3480f38 Oct 31 01:13:34.297304 unknown[652]: fetched base config from "system" Oct 31 01:13:34.297322 unknown[652]: fetched user config from "qemu" Oct 31 01:13:34.300333 ignition[652]: fetch-offline: fetch-offline passed Oct 31 01:13:34.301694 ignition[652]: Ignition finished successfully Oct 31 01:13:34.303985 systemd[1]: Finished ignition-fetch-offline.service. 
Oct 31 01:13:34.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:13:34.309033 systemd-networkd[721]: lo: Link UP Oct 31 01:13:34.309044 systemd-networkd[721]: lo: Gained carrier Oct 31 01:13:34.309512 systemd-networkd[721]: Enumeration completed Oct 31 01:13:34.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:13:34.309736 systemd-networkd[721]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 31 01:13:34.310646 systemd[1]: Started systemd-networkd.service. Oct 31 01:13:34.311032 systemd-networkd[721]: eth0: Link UP Oct 31 01:13:34.311037 systemd-networkd[721]: eth0: Gained carrier Oct 31 01:13:34.314895 systemd[1]: Reached target network.target. Oct 31 01:13:34.319394 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 31 01:13:34.326435 systemd[1]: Starting ignition-kargs.service... Oct 31 01:13:34.329725 systemd-networkd[721]: eth0: DHCPv4 address 10.0.0.95/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 31 01:13:34.329894 systemd[1]: Starting iscsiuio.service... Oct 31 01:13:34.335603 systemd[1]: Started iscsiuio.service. Oct 31 01:13:34.337198 ignition[723]: Ignition 2.14.0 Oct 31 01:13:34.337211 ignition[723]: Stage: kargs Oct 31 01:13:34.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:13:34.337304 ignition[723]: no configs at "/usr/lib/ignition/base.d" Oct 31 01:13:34.337315 ignition[723]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 31 01:13:34.338369 ignition[723]: kargs: kargs passed Oct 31 01:13:34.338404 ignition[723]: Ignition finished successfully Oct 31 01:13:34.345523 systemd[1]: Finished ignition-kargs.service. Oct 31 01:13:34.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:13:34.348979 systemd[1]: Starting ignition-disks.service... Oct 31 01:13:34.352206 systemd[1]: Starting iscsid.service... Oct 31 01:13:34.356248 ignition[733]: Ignition 2.14.0 Oct 31 01:13:34.356259 ignition[733]: Stage: disks Oct 31 01:13:34.359987 iscsid[739]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Oct 31 01:13:34.359987 iscsid[739]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Oct 31 01:13:34.359987 iscsid[739]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Oct 31 01:13:34.359987 iscsid[739]: If using hardware iscsi like qla4xxx this message can be ignored. 
Oct 31 01:13:34.359987 iscsid[739]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Oct 31 01:13:34.359987 iscsid[739]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Oct 31 01:13:34.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:13:34.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:13:34.358269 systemd[1]: Started iscsid.service. Oct 31 01:13:34.356357 ignition[733]: no configs at "/usr/lib/ignition/base.d" Oct 31 01:13:34.360204 systemd[1]: Finished ignition-disks.service. Oct 31 01:13:34.356367 ignition[733]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 31 01:13:34.363724 systemd[1]: Reached target initrd-root-device.target. Oct 31 01:13:34.357593 ignition[733]: disks: disks passed Oct 31 01:13:34.373428 systemd[1]: Reached target local-fs-pre.target. Oct 31 01:13:34.357649 ignition[733]: Ignition finished successfully Oct 31 01:13:34.376360 systemd[1]: Reached target local-fs.target. Oct 31 01:13:34.400000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:13:34.379528 systemd[1]: Reached target sysinit.target. Oct 31 01:13:34.383021 systemd[1]: Reached target basic.target. Oct 31 01:13:34.387209 systemd[1]: Starting dracut-initqueue.service... Oct 31 01:13:34.398793 systemd[1]: Finished dracut-initqueue.service. Oct 31 01:13:34.401108 systemd[1]: Reached target remote-fs-pre.target. Oct 31 01:13:34.403678 systemd[1]: Reached target remote-cryptsetup.target. 
Oct 31 01:13:34.405119 systemd[1]: Reached target remote-fs.target. Oct 31 01:13:34.417000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:13:34.407025 systemd[1]: Starting dracut-pre-mount.service... Oct 31 01:13:34.415453 systemd[1]: Finished dracut-pre-mount.service. Oct 31 01:13:34.418544 systemd[1]: Starting systemd-fsck-root.service... Oct 31 01:13:34.429713 systemd-fsck[756]: ROOT: clean, 637/553520 files, 56032/553472 blocks Oct 31 01:13:34.434925 systemd[1]: Finished systemd-fsck-root.service. Oct 31 01:13:34.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:13:34.440103 systemd[1]: Mounting sysroot.mount... Oct 31 01:13:34.447627 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Oct 31 01:13:34.447964 systemd[1]: Mounted sysroot.mount. Oct 31 01:13:34.450246 systemd[1]: Reached target initrd-root-fs.target. Oct 31 01:13:34.453753 systemd[1]: Mounting sysroot-usr.mount... Oct 31 01:13:34.456294 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Oct 31 01:13:34.456331 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 31 01:13:34.456351 systemd[1]: Reached target ignition-diskful.target. Oct 31 01:13:34.464719 systemd[1]: Mounted sysroot-usr.mount. Oct 31 01:13:34.467709 systemd[1]: Starting initrd-setup-root.service... 
Oct 31 01:13:34.472691 initrd-setup-root[766]: cut: /sysroot/etc/passwd: No such file or directory Oct 31 01:13:34.477923 initrd-setup-root[774]: cut: /sysroot/etc/group: No such file or directory Oct 31 01:13:34.481678 initrd-setup-root[782]: cut: /sysroot/etc/shadow: No such file or directory Oct 31 01:13:34.486555 initrd-setup-root[790]: cut: /sysroot/etc/gshadow: No such file or directory Oct 31 01:13:34.512864 systemd[1]: Finished initrd-setup-root.service. Oct 31 01:13:34.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:13:34.516162 systemd[1]: Starting ignition-mount.service... Oct 31 01:13:34.518582 systemd[1]: Starting sysroot-boot.service... Oct 31 01:13:34.523663 bash[807]: umount: /sysroot/usr/share/oem: not mounted. Oct 31 01:13:34.535868 ignition[809]: INFO : Ignition 2.14.0 Oct 31 01:13:34.535868 ignition[809]: INFO : Stage: mount Oct 31 01:13:34.538410 ignition[809]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 31 01:13:34.538410 ignition[809]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 31 01:13:34.538410 ignition[809]: INFO : mount: mount passed Oct 31 01:13:34.538410 ignition[809]: INFO : Ignition finished successfully Oct 31 01:13:34.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:13:34.538149 systemd[1]: Finished ignition-mount.service. Oct 31 01:13:34.548666 systemd[1]: Finished sysroot-boot.service. Oct 31 01:13:34.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:13:35.068397 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
Oct 31 01:13:35.077027 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (818) Oct 31 01:13:35.077062 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 31 01:13:35.077072 kernel: BTRFS info (device vda6): using free space tree Oct 31 01:13:35.078446 kernel: BTRFS info (device vda6): has skinny extents Oct 31 01:13:35.082943 systemd[1]: Mounted sysroot-usr-share-oem.mount. Oct 31 01:13:35.083680 systemd[1]: Starting ignition-files.service... Oct 31 01:13:35.096819 ignition[838]: INFO : Ignition 2.14.0 Oct 31 01:13:35.096819 ignition[838]: INFO : Stage: files Oct 31 01:13:35.099335 ignition[838]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 31 01:13:35.100876 ignition[838]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 31 01:13:35.103686 ignition[838]: DEBUG : files: compiled without relabeling support, skipping Oct 31 01:13:35.105575 ignition[838]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 31 01:13:35.105575 ignition[838]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 31 01:13:35.110123 ignition[838]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 31 01:13:35.110123 ignition[838]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 31 01:13:35.110123 ignition[838]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 31 01:13:35.109782 unknown[838]: wrote ssh authorized keys file for user: core Oct 31 01:13:35.118626 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Oct 31 01:13:35.118626 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Oct 31 01:13:35.118626 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing 
file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Oct 31 01:13:35.118626 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Oct 31 01:13:35.160045 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Oct 31 01:13:35.243495 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Oct 31 01:13:35.246790 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Oct 31 01:13:35.249623 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Oct 31 01:13:35.249623 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 31 01:13:35.255314 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 31 01:13:35.258132 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 31 01:13:35.261017 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 31 01:13:35.261017 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 31 01:13:35.266704 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 31 01:13:35.269633 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 31 01:13:35.272546 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file 
"/sysroot/etc/flatcar/update.conf" Oct 31 01:13:35.272546 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Oct 31 01:13:35.279682 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Oct 31 01:13:35.283845 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Oct 31 01:13:35.287486 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Oct 31 01:13:35.601227 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Oct 31 01:13:35.905841 systemd-networkd[721]: eth0: Gained IPv6LL Oct 31 01:13:36.194622 ignition[838]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Oct 31 01:13:36.194622 ignition[838]: INFO : files: op(c): [started] processing unit "containerd.service" Oct 31 01:13:36.201340 ignition[838]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Oct 31 01:13:36.201340 ignition[838]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Oct 31 01:13:36.201340 ignition[838]: INFO : files: op(c): [finished] processing unit "containerd.service" Oct 31 01:13:36.201340 ignition[838]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Oct 31 01:13:36.201340 ignition[838]: INFO : files: op(e): op(f): 
[started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 31 01:13:36.201340 ignition[838]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 31 01:13:36.201340 ignition[838]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Oct 31 01:13:36.201340 ignition[838]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Oct 31 01:13:36.201340 ignition[838]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 31 01:13:36.201340 ignition[838]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 31 01:13:36.201340 ignition[838]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Oct 31 01:13:36.201340 ignition[838]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service"
Oct 31 01:13:36.201340 ignition[838]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service"
Oct 31 01:13:36.249133 ignition[838]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Oct 31 01:13:36.251996 ignition[838]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service"
Oct 31 01:13:36.251996 ignition[838]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service"
Oct 31 01:13:36.251996 ignition[838]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service"
Oct 31 01:13:36.251996 ignition[838]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 31 01:13:36.251996 ignition[838]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file
"/sysroot/etc/.ignition-result.json"
Oct 31 01:13:36.251996 ignition[838]: INFO : files: files passed
Oct 31 01:13:36.251996 ignition[838]: INFO : Ignition finished successfully
Oct 31 01:13:36.267828 systemd[1]: Finished ignition-files.service.
Oct 31 01:13:36.277155 kernel: kauditd_printk_skb: 25 callbacks suppressed
Oct 31 01:13:36.277185 kernel: audit: type=1130 audit(1761873216.268:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:36.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:36.277130 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Oct 31 01:13:36.278636 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Oct 31 01:13:36.279203 systemd[1]: Starting ignition-quench.service...
Oct 31 01:13:36.297178 kernel: audit: type=1130 audit(1761873216.285:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:36.297200 kernel: audit: type=1131 audit(1761873216.285:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:36.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:36.285000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=?
terminal=? res=success'
Oct 31 01:13:36.282468 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 31 01:13:36.282623 systemd[1]: Finished ignition-quench.service.
Oct 31 01:13:36.303310 initrd-setup-root-after-ignition[863]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Oct 31 01:13:36.306977 initrd-setup-root-after-ignition[865]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 31 01:13:36.309871 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Oct 31 01:13:36.318155 kernel: audit: type=1130 audit(1761873216.309:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:36.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:36.310079 systemd[1]: Reached target ignition-complete.target.
Oct 31 01:13:36.321263 systemd[1]: Starting initrd-parse-etc.service...
Oct 31 01:13:36.337869 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 31 01:13:36.337983 systemd[1]: Finished initrd-parse-etc.service.
Oct 31 01:13:36.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:36.341082 systemd[1]: Reached target initrd-fs.target.
Oct 31 01:13:36.356507 kernel: audit: type=1130 audit(1761873216.340:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success'
Oct 31 01:13:36.356536 kernel: audit: type=1131 audit(1761873216.340:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:36.340000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:36.352579 systemd[1]: Reached target initrd.target.
Oct 31 01:13:36.353204 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Oct 31 01:13:36.353937 systemd[1]: Starting dracut-pre-pivot.service...
Oct 31 01:13:36.369897 systemd[1]: Finished dracut-pre-pivot.service.
Oct 31 01:13:36.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:36.372268 systemd[1]: Starting initrd-cleanup.service...
Oct 31 01:13:36.381662 kernel: audit: type=1130 audit(1761873216.370:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:36.385564 systemd[1]: Stopped target nss-lookup.target.
Oct 31 01:13:36.385773 systemd[1]: Stopped target remote-cryptsetup.target.
Oct 31 01:13:36.389889 systemd[1]: Stopped target timers.target.
Oct 31 01:13:36.391428 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 31 01:13:36.394000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:36.391546 systemd[1]: Stopped dracut-pre-pivot.service.
Oct 31 01:13:36.405183 kernel: audit: type=1131 audit(1761873216.394:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:36.395378 systemd[1]: Stopped target initrd.target.
Oct 31 01:13:36.402498 systemd[1]: Stopped target basic.target.
Oct 31 01:13:36.406724 systemd[1]: Stopped target ignition-complete.target.
Oct 31 01:13:36.409784 systemd[1]: Stopped target ignition-diskful.target.
Oct 31 01:13:36.411056 systemd[1]: Stopped target initrd-root-device.target.
Oct 31 01:13:36.415349 systemd[1]: Stopped target remote-fs.target.
Oct 31 01:13:36.416881 systemd[1]: Stopped target remote-fs-pre.target.
Oct 31 01:13:36.420860 systemd[1]: Stopped target sysinit.target.
Oct 31 01:13:36.423432 systemd[1]: Stopped target local-fs.target.
Oct 31 01:13:36.426126 systemd[1]: Stopped target local-fs-pre.target.
Oct 31 01:13:36.428815 systemd[1]: Stopped target swap.target.
Oct 31 01:13:36.433000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:36.430081 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 31 01:13:36.442720 kernel: audit: type=1131 audit(1761873216.433:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:36.430257 systemd[1]: Stopped dracut-pre-mount.service.
Oct 31 01:13:36.443000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:36.433955 systemd[1]: Stopped target cryptsetup.target.
Oct 31 01:13:36.439962 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 31 01:13:36.455581 kernel: audit: type=1131 audit(1761873216.443:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:36.449000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:36.440144 systemd[1]: Stopped dracut-initqueue.service.
Oct 31 01:13:36.444216 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 31 01:13:36.444394 systemd[1]: Stopped ignition-fetch-offline.service.
Oct 31 01:13:36.450504 systemd[1]: Stopped target paths.target.
Oct 31 01:13:36.454030 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 31 01:13:36.458660 systemd[1]: Stopped systemd-ask-password-console.path.
Oct 31 01:13:36.461487 systemd[1]: Stopped target slices.target.
Oct 31 01:13:36.463871 systemd[1]: Stopped target sockets.target.
Oct 31 01:13:36.471000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:36.466910 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 31 01:13:36.473000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:36.467015 systemd[1]: Closed iscsid.socket.
Oct 31 01:13:36.468383 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 31 01:13:36.468494 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Oct 31 01:13:36.472113 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 31 01:13:36.472197 systemd[1]: Stopped ignition-files.service.
Oct 31 01:13:36.486733 ignition[878]: INFO : Ignition 2.14.0
Oct 31 01:13:36.486733 ignition[878]: INFO : Stage: umount
Oct 31 01:13:36.486733 ignition[878]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 31 01:13:36.486733 ignition[878]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 31 01:13:36.486733 ignition[878]: INFO : umount: umount passed
Oct 31 01:13:36.486733 ignition[878]: INFO : Ignition finished successfully
Oct 31 01:13:36.475472 systemd[1]: Stopping ignition-mount.service...
Oct 31 01:13:36.477503 systemd[1]: Stopping iscsiuio.service...
Oct 31 01:13:36.498786 systemd[1]: Stopping sysroot-boot.service...
Oct 31 01:13:36.501597 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 31 01:13:36.503573 systemd[1]: Stopped systemd-udev-trigger.service.
Oct 31 01:13:36.506000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:36.506887 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 31 01:13:36.508687 systemd[1]: Stopped dracut-pre-trigger.service.
Oct 31 01:13:36.511000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:36.514154 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 31 01:13:36.516328 systemd[1]: iscsiuio.service: Deactivated successfully.
Oct 31 01:13:36.517848 systemd[1]: Stopped iscsiuio.service.
Oct 31 01:13:36.519000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:36.520805 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 31 01:13:36.520879 systemd[1]: Stopped ignition-mount.service.
Oct 31 01:13:36.524000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:36.525333 systemd[1]: Stopped target network.target.
Oct 31 01:13:36.528238 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 31 01:13:36.528284 systemd[1]: Closed iscsiuio.socket.
Oct 31 01:13:36.532140 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 31 01:13:36.532178 systemd[1]: Stopped ignition-disks.service.
Oct 31 01:13:36.534000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:36.536217 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 31 01:13:36.536255 systemd[1]: Stopped ignition-kargs.service.
Oct 31 01:13:36.538000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:36.539177 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 31 01:13:36.542000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:36.540345 systemd[1]: Stopped ignition-setup.service.
Oct 31 01:13:36.543389 systemd[1]: Stopping systemd-networkd.service...
Oct 31 01:13:36.546017 systemd[1]: Stopping systemd-resolved.service...
Oct 31 01:13:36.549669 systemd-networkd[721]: eth0: DHCPv6 lease lost
Oct 31 01:13:36.551772 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 31 01:13:36.553388 systemd[1]: Stopped systemd-networkd.service.
Oct 31 01:13:36.555000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:36.557131 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 31 01:13:36.558827 systemd[1]: Finished initrd-cleanup.service.
Oct 31 01:13:36.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:36.561000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:36.561801 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 31 01:13:36.563573 systemd[1]: Stopped systemd-resolved.service.
Oct 31 01:13:36.564000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:36.567000 audit: BPF prog-id=9 op=UNLOAD
Oct 31 01:13:36.567803 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 31 01:13:36.567831 systemd[1]: Closed systemd-networkd.socket.
Oct 31 01:13:36.571000 audit: BPF prog-id=6 op=UNLOAD
Oct 31 01:13:36.572726 systemd[1]: Stopping network-cleanup.service...
Oct 31 01:13:36.575728 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 31 01:13:36.575792 systemd[1]: Stopped parse-ip-for-networkd.service.
Oct 31 01:13:36.580000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:36.580804 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 31 01:13:36.581000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:36.580841 systemd[1]: Stopped systemd-sysctl.service.
Oct 31 01:13:36.583836 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 31 01:13:36.585017 systemd[1]: Stopped systemd-modules-load.service.
Oct 31 01:13:36.588000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:36.589340 systemd[1]: Stopping systemd-udevd.service...
Oct 31 01:13:36.593828 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 31 01:13:36.596453 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 31 01:13:36.597970 systemd[1]: Stopped sysroot-boot.service.
Oct 31 01:13:36.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:36.600766 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 31 01:13:36.602325 systemd[1]: Stopped systemd-udevd.service.
Oct 31 01:13:36.604000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:36.606018 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 31 01:13:36.606092 systemd[1]: Closed systemd-udevd-control.socket.
Oct 31 01:13:36.610681 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 31 01:13:36.610724 systemd[1]: Closed systemd-udevd-kernel.socket.
Oct 31 01:13:36.614898 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 31 01:13:36.614948 systemd[1]: Stopped dracut-pre-udev.service.
Oct 31 01:13:36.618000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:36.619051 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 31 01:13:36.619097 systemd[1]: Stopped dracut-cmdline.service.
Oct 31 01:13:36.621000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:36.623359 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 31 01:13:36.623406 systemd[1]: Stopped dracut-cmdline-ask.service.
Oct 31 01:13:36.627000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:36.627599 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 31 01:13:36.628000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:36.627665 systemd[1]: Stopped initrd-setup-root.service.
Oct 31 01:13:36.632810 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Oct 31 01:13:36.635835 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 31 01:13:36.637636 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Oct 31 01:13:36.640000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:36.640935 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 31 01:13:36.642563 systemd[1]: Stopped kmod-static-nodes.service.
Oct 31 01:13:36.645000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:36.645401 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 31 01:13:36.645447 systemd[1]: Stopped systemd-vconsole-setup.service.
Oct 31 01:13:36.648000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:36.650912 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Oct 31 01:13:36.653684 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 31 01:13:36.655312 systemd[1]: Stopped network-cleanup.service.
Oct 31 01:13:36.658000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:36.658247 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 31 01:13:36.660088 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Oct 31 01:13:36.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:36.662000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:36.663378 systemd[1]: Reached target initrd-switch-root.target.
Oct 31 01:13:36.667121 systemd[1]: Starting initrd-switch-root.service...
Oct 31 01:13:36.673218 systemd[1]: Switching root.
Oct 31 01:13:36.674000 audit: BPF prog-id=5 op=UNLOAD
Oct 31 01:13:36.675000 audit: BPF prog-id=4 op=UNLOAD
Oct 31 01:13:36.675000 audit: BPF prog-id=3 op=UNLOAD
Oct 31 01:13:36.677000 audit: BPF prog-id=8 op=UNLOAD
Oct 31 01:13:36.677000 audit: BPF prog-id=7 op=UNLOAD
Oct 31 01:13:36.695426 iscsid[739]: iscsid shutting down.
Oct 31 01:13:36.696586 systemd-journald[198]: Received SIGTERM from PID 1 (systemd).
Oct 31 01:13:36.696673 systemd-journald[198]: Journal stopped
Oct 31 01:13:39.599696 kernel: SELinux: Class mctp_socket not defined in policy.
Oct 31 01:13:39.599752 kernel: SELinux: Class anon_inode not defined in policy.
Oct 31 01:13:39.599762 kernel: SELinux: the above unknown classes and permissions will be allowed
Oct 31 01:13:39.599772 kernel: SELinux: policy capability network_peer_controls=1
Oct 31 01:13:39.599782 kernel: SELinux: policy capability open_perms=1
Oct 31 01:13:39.599795 kernel: SELinux: policy capability extended_socket_class=1
Oct 31 01:13:39.599808 kernel: SELinux: policy capability always_check_network=0
Oct 31 01:13:39.599818 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 31 01:13:39.599828 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 31 01:13:39.599838 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 31 01:13:39.599847 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 31 01:13:39.599857 systemd[1]: Successfully loaded SELinux policy in 47.132ms.
Oct 31 01:13:39.599876 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.571ms.
Oct 31 01:13:39.599887 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 31 01:13:39.599898 systemd[1]: Detected virtualization kvm.
Oct 31 01:13:39.599911 systemd[1]: Detected architecture x86-64.
Oct 31 01:13:39.599923 systemd[1]: Detected first boot.
Oct 31 01:13:39.599933 systemd[1]: Initializing machine ID from VM UUID.
Oct 31 01:13:39.599944 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Oct 31 01:13:39.599964 systemd[1]: Populated /etc with preset unit settings.
Oct 31 01:13:39.599976 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Oct 31 01:13:39.599993 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 31 01:13:39.600005 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 31 01:13:39.600017 systemd[1]: Queued start job for default target multi-user.target.
Oct 31 01:13:39.600027 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Oct 31 01:13:39.600038 systemd[1]: Created slice system-addon\x2dconfig.slice.
Oct 31 01:13:39.600049 systemd[1]: Created slice system-addon\x2drun.slice.
Oct 31 01:13:39.600060 systemd[1]: Created slice system-getty.slice.
Oct 31 01:13:39.600072 systemd[1]: Created slice system-modprobe.slice.
Oct 31 01:13:39.600082 systemd[1]: Created slice system-serial\x2dgetty.slice.
Oct 31 01:13:39.600093 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Oct 31 01:13:39.600103 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Oct 31 01:13:39.600117 systemd[1]: Created slice user.slice.
Oct 31 01:13:39.600128 systemd[1]: Started systemd-ask-password-console.path.
Oct 31 01:13:39.600138 systemd[1]: Started systemd-ask-password-wall.path.
Oct 31 01:13:39.600149 systemd[1]: Set up automount boot.automount.
Oct 31 01:13:39.600160 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Oct 31 01:13:39.600171 systemd[1]: Reached target integritysetup.target.
Oct 31 01:13:39.600181 systemd[1]: Reached target remote-cryptsetup.target.
Oct 31 01:13:39.600192 systemd[1]: Reached target remote-fs.target.
Oct 31 01:13:39.600203 systemd[1]: Reached target slices.target.
Oct 31 01:13:39.600213 systemd[1]: Reached target swap.target.
Oct 31 01:13:39.600223 systemd[1]: Reached target torcx.target.
Oct 31 01:13:39.600234 systemd[1]: Reached target veritysetup.target.
Oct 31 01:13:39.600245 systemd[1]: Listening on systemd-coredump.socket.
Oct 31 01:13:39.600255 systemd[1]: Listening on systemd-initctl.socket.
Oct 31 01:13:39.600266 systemd[1]: Listening on systemd-journald-audit.socket.
Oct 31 01:13:39.600277 systemd[1]: Listening on systemd-journald-dev-log.socket.
Oct 31 01:13:39.600287 systemd[1]: Listening on systemd-journald.socket.
Oct 31 01:13:39.600299 systemd[1]: Listening on systemd-networkd.socket.
Oct 31 01:13:39.600309 systemd[1]: Listening on systemd-udevd-control.socket.
Oct 31 01:13:39.600320 systemd[1]: Listening on systemd-udevd-kernel.socket.
Oct 31 01:13:39.600331 systemd[1]: Listening on systemd-userdbd.socket.
Oct 31 01:13:39.600341 systemd[1]: Mounting dev-hugepages.mount...
Oct 31 01:13:39.600354 systemd[1]: Mounting dev-mqueue.mount...
Oct 31 01:13:39.600365 systemd[1]: Mounting media.mount...
Oct 31 01:13:39.600376 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 31 01:13:39.600386 systemd[1]: Mounting sys-kernel-debug.mount...
Oct 31 01:13:39.600397 systemd[1]: Mounting sys-kernel-tracing.mount...
Oct 31 01:13:39.600407 systemd[1]: Mounting tmp.mount...
Oct 31 01:13:39.600417 systemd[1]: Starting flatcar-tmpfiles.service...
Oct 31 01:13:39.600428 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Oct 31 01:13:39.600439 systemd[1]: Starting kmod-static-nodes.service...
Oct 31 01:13:39.600450 systemd[1]: Starting modprobe@configfs.service...
Oct 31 01:13:39.600462 systemd[1]: Starting modprobe@dm_mod.service...
Oct 31 01:13:39.600479 systemd[1]: Starting modprobe@drm.service...
Oct 31 01:13:39.600490 systemd[1]: Starting modprobe@efi_pstore.service...
Oct 31 01:13:39.600501 systemd[1]: Starting modprobe@fuse.service...
Oct 31 01:13:39.600511 systemd[1]: Starting modprobe@loop.service...
Oct 31 01:13:39.600522 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 31 01:13:39.600534 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Oct 31 01:13:39.600544 kernel: loop: module loaded
Oct 31 01:13:39.600556 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Oct 31 01:13:39.600567 kernel: fuse: init (API version 7.34)
Oct 31 01:13:39.600578 systemd[1]: Starting systemd-journald.service...
Oct 31 01:13:39.600588 systemd[1]: Starting systemd-modules-load.service...
Oct 31 01:13:39.600599 systemd[1]: Starting systemd-network-generator.service...
Oct 31 01:13:39.600637 systemd[1]: Starting systemd-remount-fs.service...
Oct 31 01:13:39.600648 systemd[1]: Starting systemd-udev-trigger.service...
Oct 31 01:13:39.600659 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 31 01:13:39.600672 systemd-journald[1038]: Journal started
Oct 31 01:13:39.600714 systemd-journald[1038]: Runtime Journal (/run/log/journal/85770368ad6d438f96217ddea72334b2) is 6.0M, max 48.5M, 42.5M free.
Oct 31 01:13:39.490000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Oct 31 01:13:39.490000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Oct 31 01:13:39.598000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Oct 31 01:13:39.598000 audit[1038]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffd79f850f0 a2=4000 a3=7ffd79f8518c items=0 ppid=1 pid=1038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 01:13:39.598000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Oct 31 01:13:39.608467 systemd[1]: Started systemd-journald.service.
Oct 31 01:13:39.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:39.609529 systemd[1]: Mounted dev-hugepages.mount.
Oct 31 01:13:39.611024 systemd[1]: Mounted dev-mqueue.mount.
Oct 31 01:13:39.612432 systemd[1]: Mounted media.mount.
Oct 31 01:13:39.613778 systemd[1]: Mounted sys-kernel-debug.mount.
Oct 31 01:13:39.615268 systemd[1]: Mounted sys-kernel-tracing.mount.
Oct 31 01:13:39.616838 systemd[1]: Mounted tmp.mount.
Oct 31 01:13:39.618427 systemd[1]: Finished flatcar-tmpfiles.service.
Oct 31 01:13:39.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:39.620295 systemd[1]: Finished kmod-static-nodes.service.
Oct 31 01:13:39.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:39.622023 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 31 01:13:39.622198 systemd[1]: Finished modprobe@configfs.service.
Oct 31 01:13:39.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:39.623000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:39.623982 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 31 01:13:39.624200 systemd[1]: Finished modprobe@dm_mod.service.
Oct 31 01:13:39.625000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:39.625000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:39.625908 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 31 01:13:39.626118 systemd[1]: Finished modprobe@drm.service.
Oct 31 01:13:39.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:39.626000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:39.627771 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 31 01:13:39.627932 systemd[1]: Finished modprobe@efi_pstore.service.
Oct 31 01:13:39.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:39.629000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:39.629961 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 31 01:13:39.630165 systemd[1]: Finished modprobe@fuse.service.
Oct 31 01:13:39.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:39.630000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:39.631807 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 31 01:13:39.632037 systemd[1]: Finished modprobe@loop.service.
Oct 31 01:13:39.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:39.632000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:39.633920 systemd[1]: Finished systemd-modules-load.service.
Oct 31 01:13:39.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:39.636016 systemd[1]: Finished systemd-network-generator.service.
Oct 31 01:13:39.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:39.638235 systemd[1]: Finished systemd-remount-fs.service.
Oct 31 01:13:39.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:39.640387 systemd[1]: Reached target network-pre.target.
Oct 31 01:13:39.643355 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Oct 31 01:13:39.645870 systemd[1]: Mounting sys-kernel-config.mount...
Oct 31 01:13:39.647139 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 31 01:13:39.648447 systemd[1]: Starting systemd-hwdb-update.service...
Oct 31 01:13:39.651796 systemd[1]: Starting systemd-journal-flush.service...
Oct 31 01:13:39.653447 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 31 01:13:39.654520 systemd[1]: Starting systemd-random-seed.service...
Oct 31 01:13:39.656022 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Oct 31 01:13:39.656259 systemd-journald[1038]: Time spent on flushing to /var/log/journal/85770368ad6d438f96217ddea72334b2 is 15.501ms for 1036 entries.
Oct 31 01:13:39.656259 systemd-journald[1038]: System Journal (/var/log/journal/85770368ad6d438f96217ddea72334b2) is 8.0M, max 195.6M, 187.6M free.
Oct 31 01:13:39.699054 systemd-journald[1038]: Received client request to flush runtime journal.
Oct 31 01:13:39.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:39.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:39.679000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:39.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:39.656929 systemd[1]: Starting systemd-sysctl.service...
Oct 31 01:13:39.660860 systemd[1]: Starting systemd-sysusers.service...
Oct 31 01:13:39.665220 systemd[1]: Finished systemd-udev-trigger.service.
Oct 31 01:13:39.699898 udevadm[1063]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Oct 31 01:13:39.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:39.666916 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Oct 31 01:13:39.668652 systemd[1]: Mounted sys-kernel-config.mount.
Oct 31 01:13:39.671083 systemd[1]: Finished systemd-random-seed.service.
Oct 31 01:13:39.673515 systemd[1]: Reached target first-boot-complete.target.
Oct 31 01:13:39.676805 systemd[1]: Starting systemd-udev-settle.service...
Oct 31 01:13:39.678534 systemd[1]: Finished systemd-sysusers.service.
Oct 31 01:13:39.681223 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Oct 31 01:13:39.685070 systemd[1]: Finished systemd-sysctl.service.
Oct 31 01:13:39.700033 systemd[1]: Finished systemd-journal-flush.service.
Oct 31 01:13:39.704518 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Oct 31 01:13:39.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:40.103707 systemd[1]: Finished systemd-hwdb-update.service.
Oct 31 01:13:40.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:40.106914 systemd[1]: Starting systemd-udevd.service...
Oct 31 01:13:40.123199 systemd-udevd[1074]: Using default interface naming scheme 'v252'.
Oct 31 01:13:40.137147 systemd[1]: Started systemd-udevd.service.
Oct 31 01:13:40.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:40.141852 systemd[1]: Starting systemd-networkd.service...
Oct 31 01:13:40.147941 systemd[1]: Starting systemd-userdbd.service...
Oct 31 01:13:40.174950 systemd[1]: Found device dev-ttyS0.device.
Oct 31 01:13:40.184270 systemd[1]: Started systemd-userdbd.service.
Oct 31 01:13:40.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:40.189654 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Oct 31 01:13:40.213653 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Oct 31 01:13:40.221630 kernel: ACPI: button: Power Button [PWRF]
Oct 31 01:13:40.229346 systemd-networkd[1085]: lo: Link UP
Oct 31 01:13:40.229630 systemd-networkd[1085]: lo: Gained carrier
Oct 31 01:13:40.230017 systemd-networkd[1085]: Enumeration completed
Oct 31 01:13:40.230198 systemd[1]: Started systemd-networkd.service.
Oct 31 01:13:40.230360 systemd-networkd[1085]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 31 01:13:40.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:40.232461 systemd-networkd[1085]: eth0: Link UP
Oct 31 01:13:40.232546 systemd-networkd[1085]: eth0: Gained carrier
Oct 31 01:13:40.235000 audit[1080]: AVC avc: denied { confidentiality } for pid=1080 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Oct 31 01:13:40.235000 audit[1080]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55aa74257ed0 a1=338ec a2=7f761934dbc5 a3=5 items=110 ppid=1074 pid=1080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 01:13:40.235000 audit: CWD cwd="/"
Oct 31 01:13:40.235000 audit: PATH item=0 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=1 name=(null) inode=15639 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=2 name=(null) inode=15639 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=3 name=(null) inode=15640 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=4 name=(null) inode=15639 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=5 name=(null) inode=15641 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=6 name=(null) inode=15639 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=7 name=(null) inode=15642 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=8 name=(null) inode=15642 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=9 name=(null) inode=15643 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=10 name=(null) inode=15642 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=11 name=(null) inode=15644 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=12 name=(null) inode=15642 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=13 name=(null) inode=15645 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=14 name=(null) inode=15642 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=15 name=(null) inode=15646 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=16 name=(null) inode=15642 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=17 name=(null) inode=15647 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=18 name=(null) inode=15639 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=19 name=(null) inode=15648 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=20 name=(null) inode=15648 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=21 name=(null) inode=15649 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=22 name=(null) inode=15648 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=23 name=(null) inode=15650 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=24 name=(null) inode=15648 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=25 name=(null) inode=15651 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=26 name=(null) inode=15648 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=27 name=(null) inode=15652 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=28 name=(null) inode=15648 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=29 name=(null) inode=15653 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=30 name=(null) inode=15639 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=31 name=(null) inode=15654 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=32 name=(null) inode=15654 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=33 name=(null) inode=15655 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=34 name=(null) inode=15654 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=35 name=(null) inode=15656 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=36 name=(null) inode=15654 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=37 name=(null) inode=15657 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=38 name=(null) inode=15654 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=39 name=(null) inode=15658 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=40 name=(null) inode=15654 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=41 name=(null) inode=15659 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=42 name=(null) inode=15639 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=43 name=(null) inode=15660 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=44 name=(null) inode=15660 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=45 name=(null) inode=15661 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=46 name=(null) inode=15660 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=47 name=(null) inode=15662 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=48 name=(null) inode=15660 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=49 name=(null) inode=15663 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=50 name=(null) inode=15660 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=51 name=(null) inode=15664 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=52 name=(null) inode=15660 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=53 name=(null) inode=15665 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=54 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=55 name=(null) inode=15666 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=56 name=(null) inode=15666 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=57 name=(null) inode=15667 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=58 name=(null) inode=15666 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=59 name=(null) inode=15668 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=60 name=(null) inode=15666 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=61 name=(null) inode=15669 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=62 name=(null) inode=15669 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=63 name=(null) inode=15670 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=64 name=(null) inode=15669 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=65 name=(null) inode=15671 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=66 name=(null) inode=15669 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=67 name=(null) inode=15672 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=68 name=(null) inode=15669 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=69 name=(null) inode=15673 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=70 name=(null) inode=15669 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=71 name=(null) inode=15674 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=72 name=(null) inode=15666 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=73 name=(null) inode=15675 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=74 name=(null) inode=15675 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=75 name=(null) inode=15676 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=76 name=(null) inode=15675 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=77 name=(null) inode=15677 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=78 name=(null) inode=15675 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=79 name=(null) inode=15678 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=80 name=(null) inode=15675 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=81 name=(null) inode=15679 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=82 name=(null) inode=15675 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=83 name=(null) inode=15680 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=84 name=(null) inode=15666 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=85 name=(null) inode=15681 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=86 name=(null) inode=15681 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=87 name=(null) inode=15682 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=88 name=(null) inode=15681 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=89 name=(null) inode=15683 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=90 name=(null) inode=15681 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=91 name=(null) inode=15684 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=92 name=(null) inode=15681 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=93 name=(null) inode=15685 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=94 name=(null) inode=15681 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=95 name=(null) inode=15686 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=96 name=(null) inode=15666 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=97 name=(null) inode=15687 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=98 name=(null) inode=15687 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=99 name=(null) inode=15688 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=100 name=(null) inode=15687 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=101 name=(null) inode=15689 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=102 name=(null) inode=15687 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=103 name=(null) inode=15690 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=104 name=(null) inode=15687 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=105 name=(null) inode=15691 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=106 name=(null) inode=15687 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=107 name=(null) inode=15692 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PATH item=109 name=(null) inode=15693 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 31 01:13:40.235000 audit: PROCTITLE proctitle="(udev-worker)"
Oct 31 01:13:40.245741 systemd-networkd[1085]: eth0: DHCPv4 address 10.0.0.95/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 31 01:13:40.256078 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Oct 31 01:13:40.258622 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Oct 31 01:13:40.258647 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Oct 31 01:13:40.258775 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct 31 01:13:40.271635 kernel: mousedev: PS/2 mouse device common for all mice
Oct 31 01:13:40.333642 kernel: kvm: Nested Virtualization enabled
Oct 31 01:13:40.333733 kernel: SVM: kvm: Nested Paging enabled
Oct 31 01:13:40.333749 kernel: SVM: Virtual VMLOAD VMSAVE supported
Oct 31 01:13:40.333762 kernel: SVM: Virtual GIF supported
Oct 31 01:13:40.358629 kernel: EDAC MC: Ver: 3.0.0
Oct 31 01:13:40.385081 systemd[1]: Finished systemd-udev-settle.service.
Oct 31 01:13:40.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:13:40.387792 systemd[1]: Starting lvm2-activation-early.service... Oct 31 01:13:40.397341 lvm[1110]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 31 01:13:40.423545 systemd[1]: Finished lvm2-activation-early.service. Oct 31 01:13:40.424000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:13:40.425442 systemd[1]: Reached target cryptsetup.target. Oct 31 01:13:40.428100 systemd[1]: Starting lvm2-activation.service... Oct 31 01:13:40.431396 lvm[1112]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 31 01:13:40.457117 systemd[1]: Finished lvm2-activation.service. Oct 31 01:13:40.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:13:40.458864 systemd[1]: Reached target local-fs-pre.target. Oct 31 01:13:40.460447 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 31 01:13:40.460476 systemd[1]: Reached target local-fs.target. Oct 31 01:13:40.462065 systemd[1]: Reached target machines.target. Oct 31 01:13:40.464721 systemd[1]: Starting ldconfig.service... Oct 31 01:13:40.466439 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Oct 31 01:13:40.466479 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 31 01:13:40.467436 systemd[1]: Starting systemd-boot-update.service... Oct 31 01:13:40.469862 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Oct 31 01:13:40.475211 systemd[1]: Starting systemd-machine-id-commit.service... Oct 31 01:13:40.478863 systemd[1]: Starting systemd-sysext.service... Oct 31 01:13:40.481492 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Oct 31 01:13:40.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:13:40.484484 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1115 (bootctl) Oct 31 01:13:40.485916 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Oct 31 01:13:40.492715 systemd[1]: Unmounting usr-share-oem.mount... Oct 31 01:13:40.499134 systemd[1]: usr-share-oem.mount: Deactivated successfully. Oct 31 01:13:40.499364 systemd[1]: Unmounted usr-share-oem.mount. Oct 31 01:13:40.510642 kernel: loop0: detected capacity change from 0 to 224512 Oct 31 01:13:40.765406 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 31 01:13:40.766065 systemd[1]: Finished systemd-machine-id-commit.service. Oct 31 01:13:40.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:13:40.771641 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 31 01:13:40.772277 systemd-fsck[1128]: fsck.fat 4.2 (2021-01-31) Oct 31 01:13:40.772277 systemd-fsck[1128]: /dev/vda1: 790 files, 120772/258078 clusters Oct 31 01:13:40.774077 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Oct 31 01:13:40.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:13:40.778171 systemd[1]: Mounting boot.mount... Oct 31 01:13:40.785748 systemd[1]: Mounted boot.mount. Oct 31 01:13:40.793623 kernel: loop1: detected capacity change from 0 to 224512 Oct 31 01:13:40.798413 (sd-sysext)[1137]: Using extensions 'kubernetes'. Oct 31 01:13:40.799013 (sd-sysext)[1137]: Merged extensions into '/usr'. Oct 31 01:13:40.800395 systemd[1]: Finished systemd-boot-update.service. Oct 31 01:13:40.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:13:40.818536 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 31 01:13:40.819728 ldconfig[1114]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 31 01:13:40.820391 systemd[1]: Mounting usr-share-oem.mount... Oct 31 01:13:40.821803 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Oct 31 01:13:40.822994 systemd[1]: Starting modprobe@dm_mod.service... Oct 31 01:13:40.825153 systemd[1]: Starting modprobe@efi_pstore.service... Oct 31 01:13:40.827680 systemd[1]: Starting modprobe@loop.service... 
Oct 31 01:13:40.828881 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 31 01:13:40.829037 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 31 01:13:40.829151 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 31 01:13:40.832087 systemd[1]: Finished ldconfig.service. Oct 31 01:13:40.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:13:40.833653 systemd[1]: Mounted usr-share-oem.mount. Oct 31 01:13:40.835133 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 31 01:13:40.835278 systemd[1]: Finished modprobe@dm_mod.service. Oct 31 01:13:40.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:13:40.836000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:13:40.836994 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 31 01:13:40.837127 systemd[1]: Finished modprobe@efi_pstore.service. Oct 31 01:13:40.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:13:40.838000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:13:40.838887 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 31 01:13:40.839046 systemd[1]: Finished modprobe@loop.service. Oct 31 01:13:40.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:13:40.839000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:13:40.840694 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 31 01:13:40.840789 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Oct 31 01:13:40.841659 systemd[1]: Finished systemd-sysext.service. Oct 31 01:13:40.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:13:40.844389 systemd[1]: Starting ensure-sysext.service... Oct 31 01:13:40.846554 systemd[1]: Starting systemd-tmpfiles-setup.service... Oct 31 01:13:40.851631 systemd[1]: Reloading. Oct 31 01:13:40.855071 systemd-tmpfiles[1153]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Oct 31 01:13:40.855782 systemd-tmpfiles[1153]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Oct 31 01:13:40.857155 systemd-tmpfiles[1153]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 31 01:13:40.905219 /usr/lib/systemd/system-generators/torcx-generator[1173]: time="2025-10-31T01:13:40Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Oct 31 01:13:40.905247 /usr/lib/systemd/system-generators/torcx-generator[1173]: time="2025-10-31T01:13:40Z" level=info msg="torcx already run" Oct 31 01:13:40.984835 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 31 01:13:40.984852 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 31 01:13:41.003617 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 31 01:13:41.053853 systemd[1]: Finished systemd-tmpfiles-setup.service. Oct 31 01:13:41.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:13:41.057968 systemd[1]: Starting audit-rules.service... Oct 31 01:13:41.060447 systemd[1]: Starting clean-ca-certificates.service... Oct 31 01:13:41.062838 systemd[1]: Starting systemd-journal-catalog-update.service... Oct 31 01:13:41.065669 systemd[1]: Starting systemd-resolved.service... Oct 31 01:13:41.068363 systemd[1]: Starting systemd-timesyncd.service... 
Oct 31 01:13:41.070803 systemd[1]: Starting systemd-update-utmp.service... Oct 31 01:13:41.072771 systemd[1]: Finished clean-ca-certificates.service. Oct 31 01:13:41.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:13:41.074000 audit[1234]: SYSTEM_BOOT pid=1234 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Oct 31 01:13:41.080773 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 31 01:13:41.081012 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Oct 31 01:13:41.082214 systemd[1]: Starting modprobe@dm_mod.service... Oct 31 01:13:41.084341 systemd[1]: Starting modprobe@efi_pstore.service... Oct 31 01:13:41.086535 systemd[1]: Starting modprobe@loop.service... Oct 31 01:13:41.087733 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 31 01:13:41.087850 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 31 01:13:41.087969 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 31 01:13:41.088043 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 31 01:13:41.089106 systemd[1]: Finished systemd-journal-catalog-update.service. 
Oct 31 01:13:41.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:13:41.091099 systemd[1]: Finished systemd-update-utmp.service. Oct 31 01:13:41.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:13:41.092975 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 31 01:13:41.093113 systemd[1]: Finished modprobe@dm_mod.service. Oct 31 01:13:41.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:13:41.093000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:13:41.094757 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 31 01:13:41.094888 systemd[1]: Finished modprobe@efi_pstore.service. Oct 31 01:13:41.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:13:41.095000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:13:41.096694 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Oct 31 01:13:41.096822 systemd[1]: Finished modprobe@loop.service. Oct 31 01:13:41.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:13:41.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:13:41.099019 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 31 01:13:41.099116 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Oct 31 01:13:41.100387 systemd[1]: Starting systemd-update-done.service... Oct 31 01:13:41.103436 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 31 01:13:41.103627 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Oct 31 01:13:41.104673 systemd[1]: Starting modprobe@dm_mod.service... Oct 31 01:13:41.106765 systemd[1]: Starting modprobe@efi_pstore.service... Oct 31 01:13:41.108882 systemd[1]: Starting modprobe@loop.service... Oct 31 01:13:41.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:13:41.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:13:41.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:13:41.110039 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 31 01:13:41.110138 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 31 01:13:41.110221 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 31 01:13:41.110282 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 31 01:13:41.111151 systemd[1]: Finished systemd-update-done.service. Oct 31 01:13:41.112853 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 31 01:13:41.112985 systemd[1]: Finished modprobe@dm_mod.service. Oct 31 01:13:41.114692 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 31 01:13:41.114815 systemd[1]: Finished modprobe@efi_pstore.service. Oct 31 01:13:41.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:13:41.117000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:13:41.117000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Oct 31 01:13:41.117000 audit[1253]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdf9e718a0 a2=420 a3=0 items=0 ppid=1221 pid=1253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:13:41.117000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Oct 31 01:13:41.118281 augenrules[1253]: No rules Oct 31 01:13:41.117955 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 31 01:13:41.118085 systemd[1]: Finished modprobe@loop.service. Oct 31 01:13:41.119770 systemd[1]: Finished audit-rules.service. Oct 31 01:13:41.121235 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 31 01:13:41.121316 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Oct 31 01:13:41.123560 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 31 01:13:41.123807 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Oct 31 01:13:41.124943 systemd[1]: Starting modprobe@dm_mod.service... Oct 31 01:13:41.127443 systemd[1]: Starting modprobe@drm.service... Oct 31 01:13:41.129644 systemd[1]: Starting modprobe@efi_pstore.service... Oct 31 01:13:41.131942 systemd[1]: Starting modprobe@loop.service... Oct 31 01:13:41.133179 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Oct 31 01:13:41.133282 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 31 01:13:41.134421 systemd[1]: Starting systemd-networkd-wait-online.service... Oct 31 01:13:41.135921 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 31 01:13:41.136024 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 31 01:13:41.137520 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 31 01:13:41.137762 systemd[1]: Finished modprobe@dm_mod.service. Oct 31 01:13:41.139666 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 31 01:13:41.139846 systemd[1]: Finished modprobe@drm.service. Oct 31 01:13:41.141473 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 31 01:13:41.141625 systemd[1]: Finished modprobe@efi_pstore.service. Oct 31 01:13:41.143313 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 31 01:13:41.143585 systemd[1]: Finished modprobe@loop.service. Oct 31 01:13:41.145321 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 31 01:13:41.145407 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Oct 31 01:13:41.146429 systemd[1]: Finished ensure-sysext.service. Oct 31 01:13:41.154977 systemd[1]: Started systemd-timesyncd.service. Oct 31 01:13:41.156503 systemd[1]: Reached target time-set.target. Oct 31 01:13:41.158439 systemd-timesyncd[1232]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 31 01:13:41.158485 systemd-timesyncd[1232]: Initial clock synchronization to Fri 2025-10-31 01:13:41.209509 UTC. 
Oct 31 01:13:41.162274 systemd-resolved[1231]: Positive Trust Anchors: Oct 31 01:13:41.162286 systemd-resolved[1231]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 31 01:13:41.162313 systemd-resolved[1231]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 31 01:13:41.168776 systemd-resolved[1231]: Defaulting to hostname 'linux'. Oct 31 01:13:41.170112 systemd[1]: Started systemd-resolved.service. Oct 31 01:13:41.171494 systemd[1]: Reached target network.target. Oct 31 01:13:41.172757 systemd[1]: Reached target nss-lookup.target. Oct 31 01:13:41.174076 systemd[1]: Reached target sysinit.target. Oct 31 01:13:41.175400 systemd[1]: Started motdgen.path. Oct 31 01:13:41.176516 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Oct 31 01:13:41.178376 systemd[1]: Started logrotate.timer. Oct 31 01:13:41.179572 systemd[1]: Started mdadm.timer. Oct 31 01:13:41.180642 systemd[1]: Started systemd-tmpfiles-clean.timer. Oct 31 01:13:41.182029 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 31 01:13:41.182107 systemd[1]: Reached target paths.target. Oct 31 01:13:41.183296 systemd[1]: Reached target timers.target. Oct 31 01:13:41.184757 systemd[1]: Listening on dbus.socket. Oct 31 01:13:41.186996 systemd[1]: Starting docker.socket... Oct 31 01:13:41.188815 systemd[1]: Listening on sshd.socket. 
Oct 31 01:13:41.190095 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 31 01:13:41.190362 systemd[1]: Listening on docker.socket. Oct 31 01:13:41.191605 systemd[1]: Reached target sockets.target. Oct 31 01:13:41.192866 systemd[1]: Reached target basic.target. Oct 31 01:13:41.194171 systemd[1]: System is tainted: cgroupsv1 Oct 31 01:13:41.194214 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 31 01:13:41.194233 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 31 01:13:41.195078 systemd[1]: Starting containerd.service... Oct 31 01:13:41.196982 systemd[1]: Starting dbus.service... Oct 31 01:13:41.198760 systemd[1]: Starting enable-oem-cloudinit.service... Oct 31 01:13:41.201082 systemd[1]: Starting extend-filesystems.service... Oct 31 01:13:41.202507 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Oct 31 01:13:41.202921 jq[1285]: false Oct 31 01:13:41.203668 systemd[1]: Starting motdgen.service... Oct 31 01:13:41.207538 systemd[1]: Starting prepare-helm.service... Oct 31 01:13:41.210145 systemd[1]: Starting ssh-key-proc-cmdline.service... Oct 31 01:13:41.212316 systemd[1]: Starting sshd-keygen.service... 
Oct 31 01:13:41.219839 extend-filesystems[1286]: Found loop1 Oct 31 01:13:41.219839 extend-filesystems[1286]: Found sr0 Oct 31 01:13:41.219839 extend-filesystems[1286]: Found vda Oct 31 01:13:41.219839 extend-filesystems[1286]: Found vda1 Oct 31 01:13:41.219839 extend-filesystems[1286]: Found vda2 Oct 31 01:13:41.219839 extend-filesystems[1286]: Found vda3 Oct 31 01:13:41.219839 extend-filesystems[1286]: Found usr Oct 31 01:13:41.219839 extend-filesystems[1286]: Found vda4 Oct 31 01:13:41.219839 extend-filesystems[1286]: Found vda6 Oct 31 01:13:41.219839 extend-filesystems[1286]: Found vda7 Oct 31 01:13:41.219839 extend-filesystems[1286]: Found vda9 Oct 31 01:13:41.219839 extend-filesystems[1286]: Checking size of /dev/vda9 Oct 31 01:13:41.215176 systemd[1]: Starting systemd-logind.service... Oct 31 01:13:41.235749 dbus-daemon[1284]: [system] SELinux support is enabled Oct 31 01:13:41.217718 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 31 01:13:41.217779 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 31 01:13:41.254856 jq[1308]: true Oct 31 01:13:41.218807 systemd[1]: Starting update-engine.service... Oct 31 01:13:41.220915 systemd[1]: Starting update-ssh-keys-after-ignition.service... Oct 31 01:13:41.255464 extend-filesystems[1286]: Resized partition /dev/vda9 Oct 31 01:13:41.257086 tar[1310]: linux-amd64/LICENSE Oct 31 01:13:41.257086 tar[1310]: linux-amd64/helm Oct 31 01:13:41.223586 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 31 01:13:41.257462 jq[1315]: true Oct 31 01:13:41.223869 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Oct 31 01:13:41.224661 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Oct 31 01:13:41.224879 systemd[1]: Finished ssh-key-proc-cmdline.service.
Oct 31 01:13:41.226638 systemd[1]: motdgen.service: Deactivated successfully.
Oct 31 01:13:41.226842 systemd[1]: Finished motdgen.service.
Oct 31 01:13:41.235943 systemd[1]: Started dbus.service.
Oct 31 01:13:41.238947 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Oct 31 01:13:41.238969 systemd[1]: Reached target system-config.target.
Oct 31 01:13:41.240023 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct 31 01:13:41.240041 systemd[1]: Reached target user-config.target.
Oct 31 01:13:41.268447 extend-filesystems[1343]: resize2fs 1.46.5 (30-Dec-2021)
Oct 31 01:13:41.273724 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Oct 31 01:13:41.273753 bash[1340]: Updated "/home/core/.ssh/authorized_keys"
Oct 31 01:13:41.263491 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Oct 31 01:13:41.273890 update_engine[1306]: I1031 01:13:41.271441 1306 main.cc:92] Flatcar Update Engine starting
Oct 31 01:13:41.280643 update_engine[1306]: I1031 01:13:41.277638 1306 update_check_scheduler.cc:74] Next update check in 4m50s
Oct 31 01:13:41.280712 env[1316]: time="2025-10-31T01:13:41.278718678Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Oct 31 01:13:41.278029 systemd[1]: Started update-engine.service.
Oct 31 01:13:41.283194 systemd[1]: Started locksmithd.service.
Oct 31 01:13:41.293623 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Oct 31 01:13:41.312657 env[1316]: time="2025-10-31T01:13:41.308404650Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Oct 31 01:13:41.312733 env[1316]: time="2025-10-31T01:13:41.312700797Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Oct 31 01:13:41.313455 extend-filesystems[1343]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Oct 31 01:13:41.313455 extend-filesystems[1343]: old_desc_blocks = 1, new_desc_blocks = 1
Oct 31 01:13:41.313455 extend-filesystems[1343]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Oct 31 01:13:41.313010 systemd-logind[1300]: Watching system buttons on /dev/input/event1 (Power Button)
Oct 31 01:13:41.320141 env[1316]: time="2025-10-31T01:13:41.314111172Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Oct 31 01:13:41.320141 env[1316]: time="2025-10-31T01:13:41.314134055Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Oct 31 01:13:41.320141 env[1316]: time="2025-10-31T01:13:41.314366381Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 31 01:13:41.320141 env[1316]: time="2025-10-31T01:13:41.314381128Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Oct 31 01:13:41.320141 env[1316]: time="2025-10-31T01:13:41.314392840Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Oct 31 01:13:41.320141 env[1316]: time="2025-10-31T01:13:41.314401527Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Oct 31 01:13:41.320141 env[1316]: time="2025-10-31T01:13:41.314460497Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Oct 31 01:13:41.320141 env[1316]: time="2025-10-31T01:13:41.314667215Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Oct 31 01:13:41.320141 env[1316]: time="2025-10-31T01:13:41.314791037Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 31 01:13:41.320141 env[1316]: time="2025-10-31T01:13:41.314803791Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Oct 31 01:13:41.320333 extend-filesystems[1286]: Resized filesystem in /dev/vda9
Oct 31 01:13:41.313027 systemd-logind[1300]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Oct 31 01:13:41.320491 env[1316]: time="2025-10-31T01:13:41.314843906Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Oct 31 01:13:41.320491 env[1316]: time="2025-10-31T01:13:41.314854256Z" level=info msg="metadata content store policy set" policy=shared
Oct 31 01:13:41.313702 systemd[1]: extend-filesystems.service: Deactivated successfully.
Oct 31 01:13:41.313985 systemd[1]: Finished extend-filesystems.service.
Oct 31 01:13:41.314352 systemd-logind[1300]: New seat seat0.
Oct 31 01:13:41.324623 env[1316]: time="2025-10-31T01:13:41.322093682Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Oct 31 01:13:41.324623 env[1316]: time="2025-10-31T01:13:41.322130732Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Oct 31 01:13:41.324623 env[1316]: time="2025-10-31T01:13:41.322145579Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Oct 31 01:13:41.324623 env[1316]: time="2025-10-31T01:13:41.322176427Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Oct 31 01:13:41.324623 env[1316]: time="2025-10-31T01:13:41.322190013Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Oct 31 01:13:41.324623 env[1316]: time="2025-10-31T01:13:41.322204590Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Oct 31 01:13:41.324623 env[1316]: time="2025-10-31T01:13:41.322216032Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Oct 31 01:13:41.324623 env[1316]: time="2025-10-31T01:13:41.322228625Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Oct 31 01:13:41.324623 env[1316]: time="2025-10-31T01:13:41.322241139Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Oct 31 01:13:41.324623 env[1316]: time="2025-10-31T01:13:41.322254123Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Oct 31 01:13:41.324623 env[1316]: time="2025-10-31T01:13:41.322265705Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Oct 31 01:13:41.324623 env[1316]: time="2025-10-31T01:13:41.322276575Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Oct 31 01:13:41.324623 env[1316]: time="2025-10-31T01:13:41.322373146Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Oct 31 01:13:41.324623 env[1316]: time="2025-10-31T01:13:41.322437647Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Oct 31 01:13:41.324910 env[1316]: time="2025-10-31T01:13:41.322742018Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Oct 31 01:13:41.324910 env[1316]: time="2025-10-31T01:13:41.322764680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Oct 31 01:13:41.324910 env[1316]: time="2025-10-31T01:13:41.322776883Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Oct 31 01:13:41.324910 env[1316]: time="2025-10-31T01:13:41.322822469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Oct 31 01:13:41.324910 env[1316]: time="2025-10-31T01:13:41.322835353Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Oct 31 01:13:41.324910 env[1316]: time="2025-10-31T01:13:41.322846684Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Oct 31 01:13:41.324910 env[1316]: time="2025-10-31T01:13:41.322856392Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Oct 31 01:13:41.324910 env[1316]: time="2025-10-31T01:13:41.322867463Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Oct 31 01:13:41.324910 env[1316]: time="2025-10-31T01:13:41.322879205Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Oct 31 01:13:41.324910 env[1316]: time="2025-10-31T01:13:41.322890647Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Oct 31 01:13:41.324910 env[1316]: time="2025-10-31T01:13:41.322909562Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Oct 31 01:13:41.324910 env[1316]: time="2025-10-31T01:13:41.322924109Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Oct 31 01:13:41.324910 env[1316]: time="2025-10-31T01:13:41.323019047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Oct 31 01:13:41.324910 env[1316]: time="2025-10-31T01:13:41.323032833Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Oct 31 01:13:41.324910 env[1316]: time="2025-10-31T01:13:41.323043553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Oct 31 01:13:41.325247 env[1316]: time="2025-10-31T01:13:41.323054103Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Oct 31 01:13:41.325247 env[1316]: time="2025-10-31T01:13:41.323069823Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Oct 31 01:13:41.325247 env[1316]: time="2025-10-31T01:13:41.323080463Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Oct 31 01:13:41.325247 env[1316]: time="2025-10-31T01:13:41.323099799Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Oct 31 01:13:41.325247 env[1316]: time="2025-10-31T01:13:41.323132160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Oct 31 01:13:41.325347 env[1316]: time="2025-10-31T01:13:41.323302740Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Oct 31 01:13:41.325347 env[1316]: time="2025-10-31T01:13:41.323348475Z" level=info msg="Connect containerd service"
Oct 31 01:13:41.325347 env[1316]: time="2025-10-31T01:13:41.323377119Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Oct 31 01:13:41.325347 env[1316]: time="2025-10-31T01:13:41.323823646Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Oct 31 01:13:41.325347 env[1316]: time="2025-10-31T01:13:41.324026206Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Oct 31 01:13:41.325347 env[1316]: time="2025-10-31T01:13:41.324054910Z" level=info msg=serving... address=/run/containerd/containerd.sock
Oct 31 01:13:41.325347 env[1316]: time="2025-10-31T01:13:41.324086499Z" level=info msg="containerd successfully booted in 0.046300s"
Oct 31 01:13:41.327365 systemd[1]: Started containerd.service.
Oct 31 01:13:41.330128 env[1316]: time="2025-10-31T01:13:41.330093134Z" level=info msg="Start subscribing containerd event"
Oct 31 01:13:41.330202 systemd[1]: Started systemd-logind.service.
Oct 31 01:13:41.331841 env[1316]: time="2025-10-31T01:13:41.330224861Z" level=info msg="Start recovering state"
Oct 31 01:13:41.331841 env[1316]: time="2025-10-31T01:13:41.330291225Z" level=info msg="Start event monitor"
Oct 31 01:13:41.331841 env[1316]: time="2025-10-31T01:13:41.330315361Z" level=info msg="Start snapshots syncer"
Oct 31 01:13:41.331841 env[1316]: time="2025-10-31T01:13:41.330324307Z" level=info msg="Start cni network conf syncer for default"
Oct 31 01:13:41.331841 env[1316]: time="2025-10-31T01:13:41.330331812Z" level=info msg="Start streaming server"
Oct 31 01:13:41.349509 locksmithd[1348]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Oct 31 01:13:41.655143 tar[1310]: linux-amd64/README.md
Oct 31 01:13:41.659113 systemd[1]: Finished prepare-helm.service.
Oct 31 01:13:41.729719 systemd-networkd[1085]: eth0: Gained IPv6LL
Oct 31 01:13:41.731159 systemd[1]: Finished systemd-networkd-wait-online.service.
Oct 31 01:13:41.733049 systemd[1]: Reached target network-online.target.
Oct 31 01:13:41.735737 systemd[1]: Starting kubelet.service...
Oct 31 01:13:41.752761 sshd_keygen[1314]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Oct 31 01:13:41.770281 systemd[1]: Finished sshd-keygen.service.
Oct 31 01:13:41.773098 systemd[1]: Starting issuegen.service...
Oct 31 01:13:41.779031 systemd[1]: issuegen.service: Deactivated successfully.
Oct 31 01:13:41.779241 systemd[1]: Finished issuegen.service.
Oct 31 01:13:41.781838 systemd[1]: Starting systemd-user-sessions.service...
Oct 31 01:13:41.788083 systemd[1]: Finished systemd-user-sessions.service.
Oct 31 01:13:41.790750 systemd[1]: Started getty@tty1.service.
Oct 31 01:13:41.793105 systemd[1]: Started serial-getty@ttyS0.service.
Oct 31 01:13:41.794685 systemd[1]: Reached target getty.target.
Oct 31 01:13:42.480665 systemd[1]: Started kubelet.service.
Oct 31 01:13:42.482797 systemd[1]: Reached target multi-user.target.
Oct 31 01:13:42.485782 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Oct 31 01:13:42.497606 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Oct 31 01:13:42.498256 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Oct 31 01:13:42.511982 systemd[1]: Startup finished in 5.902s (kernel) + 5.771s (userspace) = 11.674s.
Oct 31 01:13:43.201343 kubelet[1384]: E1031 01:13:43.201237 1384 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 31 01:13:43.203572 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 31 01:13:43.203818 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 31 01:13:45.298422 systemd[1]: Created slice system-sshd.slice.
Oct 31 01:13:45.299890 systemd[1]: Started sshd@0-10.0.0.95:22-10.0.0.1:46888.service.
Oct 31 01:13:45.335143 sshd[1394]: Accepted publickey for core from 10.0.0.1 port 46888 ssh2: RSA SHA256:BzWaVf4M0LrLtWllQvHpK+M/9x+T9duV7gwz9J5cQAA
Oct 31 01:13:45.336717 sshd[1394]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 31 01:13:45.344961 systemd[1]: Created slice user-500.slice.
Oct 31 01:13:45.345915 systemd[1]: Starting user-runtime-dir@500.service...
Oct 31 01:13:45.347518 systemd-logind[1300]: New session 1 of user core.
Oct 31 01:13:45.355005 systemd[1]: Finished user-runtime-dir@500.service.
Oct 31 01:13:45.356296 systemd[1]: Starting user@500.service...
Oct 31 01:13:45.359759 (systemd)[1399]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Oct 31 01:13:45.427579 systemd[1399]: Queued start job for default target default.target.
Oct 31 01:13:45.427844 systemd[1399]: Reached target paths.target.
Oct 31 01:13:45.427865 systemd[1399]: Reached target sockets.target.
Oct 31 01:13:45.427882 systemd[1399]: Reached target timers.target.
Oct 31 01:13:45.427896 systemd[1399]: Reached target basic.target.
Oct 31 01:13:45.427941 systemd[1399]: Reached target default.target.
Oct 31 01:13:45.427969 systemd[1399]: Startup finished in 63ms.
Oct 31 01:13:45.428087 systemd[1]: Started user@500.service.
Oct 31 01:13:45.429082 systemd[1]: Started session-1.scope.
Oct 31 01:13:45.478968 systemd[1]: Started sshd@1-10.0.0.95:22-10.0.0.1:46898.service.
Oct 31 01:13:45.510847 sshd[1408]: Accepted publickey for core from 10.0.0.1 port 46898 ssh2: RSA SHA256:BzWaVf4M0LrLtWllQvHpK+M/9x+T9duV7gwz9J5cQAA
Oct 31 01:13:45.512112 sshd[1408]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 31 01:13:45.515434 systemd-logind[1300]: New session 2 of user core.
Oct 31 01:13:45.516320 systemd[1]: Started session-2.scope.
Oct 31 01:13:45.571077 sshd[1408]: pam_unix(sshd:session): session closed for user core
Oct 31 01:13:45.574080 systemd[1]: Started sshd@2-10.0.0.95:22-10.0.0.1:46902.service.
Oct 31 01:13:45.574810 systemd[1]: sshd@1-10.0.0.95:22-10.0.0.1:46898.service: Deactivated successfully.
Oct 31 01:13:45.575760 systemd-logind[1300]: Session 2 logged out. Waiting for processes to exit.
Oct 31 01:13:45.575768 systemd[1]: session-2.scope: Deactivated successfully.
Oct 31 01:13:45.576667 systemd-logind[1300]: Removed session 2.
Oct 31 01:13:45.603785 sshd[1414]: Accepted publickey for core from 10.0.0.1 port 46902 ssh2: RSA SHA256:BzWaVf4M0LrLtWllQvHpK+M/9x+T9duV7gwz9J5cQAA
Oct 31 01:13:45.604934 sshd[1414]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 31 01:13:45.607971 systemd-logind[1300]: New session 3 of user core.
Oct 31 01:13:45.608666 systemd[1]: Started session-3.scope.
Oct 31 01:13:45.658063 sshd[1414]: pam_unix(sshd:session): session closed for user core
Oct 31 01:13:45.660800 systemd[1]: Started sshd@3-10.0.0.95:22-10.0.0.1:46908.service.
Oct 31 01:13:45.661552 systemd[1]: sshd@2-10.0.0.95:22-10.0.0.1:46902.service: Deactivated successfully.
Oct 31 01:13:45.662524 systemd-logind[1300]: Session 3 logged out. Waiting for processes to exit.
Oct 31 01:13:45.662541 systemd[1]: session-3.scope: Deactivated successfully.
Oct 31 01:13:45.663715 systemd-logind[1300]: Removed session 3.
Oct 31 01:13:45.690868 sshd[1421]: Accepted publickey for core from 10.0.0.1 port 46908 ssh2: RSA SHA256:BzWaVf4M0LrLtWllQvHpK+M/9x+T9duV7gwz9J5cQAA
Oct 31 01:13:45.691928 sshd[1421]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 31 01:13:45.695280 systemd-logind[1300]: New session 4 of user core.
Oct 31 01:13:45.696036 systemd[1]: Started session-4.scope.
Oct 31 01:13:45.750907 sshd[1421]: pam_unix(sshd:session): session closed for user core
Oct 31 01:13:45.753240 systemd[1]: Started sshd@4-10.0.0.95:22-10.0.0.1:46924.service.
Oct 31 01:13:45.753741 systemd[1]: sshd@3-10.0.0.95:22-10.0.0.1:46908.service: Deactivated successfully.
Oct 31 01:13:45.754562 systemd-logind[1300]: Session 4 logged out. Waiting for processes to exit.
Oct 31 01:13:45.754633 systemd[1]: session-4.scope: Deactivated successfully.
Oct 31 01:13:45.755493 systemd-logind[1300]: Removed session 4.
Oct 31 01:13:45.782842 sshd[1427]: Accepted publickey for core from 10.0.0.1 port 46924 ssh2: RSA SHA256:BzWaVf4M0LrLtWllQvHpK+M/9x+T9duV7gwz9J5cQAA
Oct 31 01:13:45.784031 sshd[1427]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 31 01:13:45.788140 systemd-logind[1300]: New session 5 of user core.
Oct 31 01:13:45.788953 systemd[1]: Started session-5.scope.
Oct 31 01:13:45.844520 sudo[1433]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Oct 31 01:13:45.844721 sudo[1433]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Oct 31 01:13:45.853839 dbus-daemon[1284]: \xd0\u000d\xd7j\u0005V: received setenforce notice (enforcing=-1376532064)
Oct 31 01:13:45.856026 sudo[1433]: pam_unix(sudo:session): session closed for user root
Oct 31 01:13:45.857676 sshd[1427]: pam_unix(sshd:session): session closed for user core
Oct 31 01:13:45.860134 systemd[1]: Started sshd@5-10.0.0.95:22-10.0.0.1:46932.service.
Oct 31 01:13:45.860580 systemd[1]: sshd@4-10.0.0.95:22-10.0.0.1:46924.service: Deactivated successfully.
Oct 31 01:13:45.861465 systemd[1]: session-5.scope: Deactivated successfully.
Oct 31 01:13:45.861494 systemd-logind[1300]: Session 5 logged out. Waiting for processes to exit.
Oct 31 01:13:45.862407 systemd-logind[1300]: Removed session 5.
Oct 31 01:13:45.890226 sshd[1435]: Accepted publickey for core from 10.0.0.1 port 46932 ssh2: RSA SHA256:BzWaVf4M0LrLtWllQvHpK+M/9x+T9duV7gwz9J5cQAA
Oct 31 01:13:45.891335 sshd[1435]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 31 01:13:45.894902 systemd-logind[1300]: New session 6 of user core.
Oct 31 01:13:45.895656 systemd[1]: Started session-6.scope.
Oct 31 01:13:45.948297 sudo[1442]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Oct 31 01:13:45.948503 sudo[1442]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Oct 31 01:13:45.951418 sudo[1442]: pam_unix(sudo:session): session closed for user root
Oct 31 01:13:45.955836 sudo[1441]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Oct 31 01:13:45.956037 sudo[1441]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Oct 31 01:13:45.964662 systemd[1]: Stopping audit-rules.service...
Oct 31 01:13:45.964000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Oct 31 01:13:45.966110 auditctl[1445]: No rules
Oct 31 01:13:45.966465 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 31 01:13:45.966678 systemd[1]: Stopped audit-rules.service.
Oct 31 01:13:45.967396 kernel: kauditd_printk_skb: 228 callbacks suppressed
Oct 31 01:13:45.967439 kernel: audit: type=1305 audit(1761873225.964:150): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Oct 31 01:13:45.968096 systemd[1]: Starting audit-rules.service...
Oct 31 01:13:45.964000 audit[1445]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffea2aadd90 a2=420 a3=0 items=0 ppid=1 pid=1445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 01:13:45.978416 kernel: audit: type=1300 audit(1761873225.964:150): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffea2aadd90 a2=420 a3=0 items=0 ppid=1 pid=1445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 01:13:45.978463 kernel: audit: type=1327 audit(1761873225.964:150): proctitle=2F7362696E2F617564697463746C002D44
Oct 31 01:13:45.964000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44
Oct 31 01:13:45.965000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:45.983888 augenrules[1463]: No rules
Oct 31 01:13:45.984486 systemd[1]: Finished audit-rules.service.
Oct 31 01:13:45.985834 sudo[1441]: pam_unix(sudo:session): session closed for user root
Oct 31 01:13:45.986130 kernel: audit: type=1131 audit(1761873225.965:151): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:45.986164 kernel: audit: type=1130 audit(1761873225.983:152): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:45.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:45.987142 sshd[1435]: pam_unix(sshd:session): session closed for user core
Oct 31 01:13:45.989435 systemd[1]: Started sshd@6-10.0.0.95:22-10.0.0.1:46942.service.
Oct 31 01:13:45.990013 systemd[1]: sshd@5-10.0.0.95:22-10.0.0.1:46932.service: Deactivated successfully.
Oct 31 01:13:45.991032 systemd[1]: session-6.scope: Deactivated successfully.
Oct 31 01:13:45.991676 systemd-logind[1300]: Session 6 logged out. Waiting for processes to exit.
Oct 31 01:13:45.984000 audit[1441]: USER_END pid=1441 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:45.992666 systemd-logind[1300]: Removed session 6.
Oct 31 01:13:45.997861 kernel: audit: type=1106 audit(1761873225.984:153): pid=1441 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:45.997900 kernel: audit: type=1104 audit(1761873225.984:154): pid=1441 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:45.984000 audit[1441]: CRED_DISP pid=1441 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:45.987000 audit[1435]: USER_END pid=1435 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 01:13:46.011221 kernel: audit: type=1106 audit(1761873225.987:155): pid=1435 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 01:13:46.011266 kernel: audit: type=1104 audit(1761873225.987:156): pid=1435 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 01:13:45.987000 audit[1435]: CRED_DISP pid=1435 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 01:13:45.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.95:22-10.0.0.1:46942 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:46.023078 kernel: audit: type=1130 audit(1761873225.988:157): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.95:22-10.0.0.1:46942 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:45.989000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.95:22-10.0.0.1:46932 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:46.024000 audit[1468]: USER_ACCT pid=1468 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 01:13:46.025398 sshd[1468]: Accepted publickey for core from 10.0.0.1 port 46942 ssh2: RSA SHA256:BzWaVf4M0LrLtWllQvHpK+M/9x+T9duV7gwz9J5cQAA
Oct 31 01:13:46.025000 audit[1468]: CRED_ACQ pid=1468 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 01:13:46.025000 audit[1468]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffdb738010 a2=3 a3=0 items=0 ppid=1 pid=1468 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 01:13:46.025000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Oct 31 01:13:46.026345 sshd[1468]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 31 01:13:46.030009 systemd-logind[1300]: New session 7 of user core.
Oct 31 01:13:46.030770 systemd[1]: Started session-7.scope.
Oct 31 01:13:46.034000 audit[1468]: USER_START pid=1468 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 01:13:46.035000 audit[1473]: CRED_ACQ pid=1473 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 01:13:46.084000 audit[1474]: USER_ACCT pid=1474 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:46.085175 sudo[1474]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Oct 31 01:13:46.084000 audit[1474]: CRED_REFR pid=1474 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:46.085376 sudo[1474]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Oct 31 01:13:46.086000 audit[1474]: USER_START pid=1474 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Oct 31 01:13:46.115198 systemd[1]: Starting docker.service...
Oct 31 01:13:46.187041 env[1486]: time="2025-10-31T01:13:46.186969294Z" level=info msg="Starting up" Oct 31 01:13:46.188842 env[1486]: time="2025-10-31T01:13:46.188788353Z" level=info msg="parsed scheme: \"unix\"" module=grpc Oct 31 01:13:46.188842 env[1486]: time="2025-10-31T01:13:46.188821750Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Oct 31 01:13:46.188931 env[1486]: time="2025-10-31T01:13:46.188845403Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Oct 31 01:13:46.188931 env[1486]: time="2025-10-31T01:13:46.188855358Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Oct 31 01:13:46.191061 env[1486]: time="2025-10-31T01:13:46.191007764Z" level=info msg="parsed scheme: \"unix\"" module=grpc Oct 31 01:13:46.191061 env[1486]: time="2025-10-31T01:13:46.191039049Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Oct 31 01:13:46.191061 env[1486]: time="2025-10-31T01:13:46.191060861Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Oct 31 01:13:46.191061 env[1486]: time="2025-10-31T01:13:46.191074086Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Oct 31 01:13:46.872572 env[1486]: time="2025-10-31T01:13:46.872518570Z" level=warning msg="Your kernel does not support cgroup blkio weight" Oct 31 01:13:46.872572 env[1486]: time="2025-10-31T01:13:46.872549302Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Oct 31 01:13:46.872824 env[1486]: time="2025-10-31T01:13:46.872745601Z" level=info msg="Loading containers: start." 
Oct 31 01:13:46.950000 audit[1521]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1521 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:13:46.950000 audit[1521]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffead06ae80 a2=0 a3=7ffead06ae6c items=0 ppid=1486 pid=1521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:13:46.950000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Oct 31 01:13:46.951000 audit[1523]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1523 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:13:46.951000 audit[1523]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffe27bab0e0 a2=0 a3=7ffe27bab0cc items=0 ppid=1486 pid=1523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:13:46.951000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Oct 31 01:13:46.952000 audit[1525]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1525 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:13:46.952000 audit[1525]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffeb7d9a6c0 a2=0 a3=7ffeb7d9a6ac items=0 ppid=1486 pid=1525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:13:46.952000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Oct 31 01:13:46.953000 audit[1527]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1527 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:13:46.953000 audit[1527]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc0b9a8bc0 a2=0 a3=7ffc0b9a8bac items=0 ppid=1486 pid=1527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:13:46.953000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Oct 31 01:13:46.955000 audit[1529]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1529 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:13:46.955000 audit[1529]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffdb494b990 a2=0 a3=7ffdb494b97c items=0 ppid=1486 pid=1529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:13:46.955000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Oct 31 01:13:46.981000 audit[1534]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1534 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:13:46.981000 audit[1534]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc43da0380 a2=0 a3=7ffc43da036c items=0 ppid=1486 pid=1534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:13:46.981000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Oct 31 01:13:46.990000 audit[1536]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1536 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:13:46.990000 audit[1536]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe225b64a0 a2=0 a3=7ffe225b648c items=0 ppid=1486 pid=1536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:13:46.990000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Oct 31 01:13:46.992000 audit[1538]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1538 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:13:46.992000 audit[1538]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffcdb3d5870 a2=0 a3=7ffcdb3d585c items=0 ppid=1486 pid=1538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:13:46.992000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Oct 31 01:13:46.993000 audit[1540]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1540 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:13:46.993000 audit[1540]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7fff7f6e63f0 a2=0 a3=7fff7f6e63dc items=0 ppid=1486 pid=1540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:13:46.993000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Oct 31 01:13:47.003000 audit[1544]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1544 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:13:47.003000 audit[1544]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffd7de43a30 a2=0 a3=7ffd7de43a1c items=0 ppid=1486 pid=1544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:13:47.003000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Oct 31 01:13:47.007000 audit[1545]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1545 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:13:47.007000 audit[1545]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe2ae2f920 a2=0 a3=7ffe2ae2f90c items=0 ppid=1486 pid=1545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:13:47.007000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Oct 31 01:13:47.018653 kernel: Initializing XFRM netlink socket Oct 31 01:13:47.329361 env[1486]: time="2025-10-31T01:13:47.329228887Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Oct 31 01:13:47.344000 audit[1553]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1553 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:13:47.344000 audit[1553]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffd0d5543f0 a2=0 a3=7ffd0d5543dc items=0 ppid=1486 pid=1553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:13:47.344000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Oct 31 01:13:47.365000 audit[1556]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1556 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:13:47.365000 audit[1556]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7fffdeae0e70 a2=0 a3=7fffdeae0e5c items=0 ppid=1486 pid=1556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:13:47.365000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Oct 31 01:13:47.369000 audit[1559]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1559 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:13:47.369000 audit[1559]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fff9106a680 a2=0 a3=7fff9106a66c items=0 ppid=1486 pid=1559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:13:47.369000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Oct 31 01:13:47.371000 audit[1561]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1561 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:13:47.371000 audit[1561]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffd0d088440 a2=0 a3=7ffd0d08842c items=0 ppid=1486 pid=1561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:13:47.371000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Oct 31 01:13:47.372000 audit[1563]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1563 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:13:47.372000 audit[1563]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7fff4bc95a10 a2=0 a3=7fff4bc959fc items=0 ppid=1486 pid=1563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:13:47.372000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Oct 31 01:13:47.374000 audit[1565]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1565 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:13:47.374000 audit[1565]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7fff349f5410 a2=0 a3=7fff349f53fc items=0 ppid=1486 
pid=1565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:13:47.374000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Oct 31 01:13:47.376000 audit[1567]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1567 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:13:47.376000 audit[1567]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffccefa8bc0 a2=0 a3=7ffccefa8bac items=0 ppid=1486 pid=1567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:13:47.376000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Oct 31 01:13:47.383000 audit[1570]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1570 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:13:47.383000 audit[1570]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffc4d085a40 a2=0 a3=7ffc4d085a2c items=0 ppid=1486 pid=1570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:13:47.383000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Oct 31 01:13:47.384000 audit[1572]: NETFILTER_CFG table=filter:21 family=2 entries=1 
op=nft_register_rule pid=1572 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:13:47.384000 audit[1572]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffebd500b40 a2=0 a3=7ffebd500b2c items=0 ppid=1486 pid=1572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:13:47.384000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Oct 31 01:13:47.386000 audit[1574]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1574 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:13:47.386000 audit[1574]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffd04949f50 a2=0 a3=7ffd04949f3c items=0 ppid=1486 pid=1574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:13:47.386000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Oct 31 01:13:47.388000 audit[1576]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1576 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:13:47.388000 audit[1576]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffddf4e2560 a2=0 a3=7ffddf4e254c items=0 ppid=1486 pid=1576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:13:47.388000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Oct 31 01:13:47.389476 systemd-networkd[1085]: docker0: Link UP Oct 31 01:13:47.399000 audit[1580]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1580 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:13:47.399000 audit[1580]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc7ca960b0 a2=0 a3=7ffc7ca9609c items=0 ppid=1486 pid=1580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:13:47.399000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Oct 31 01:13:47.406000 audit[1581]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1581 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:13:47.406000 audit[1581]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffc85bc6840 a2=0 a3=7ffc85bc682c items=0 ppid=1486 pid=1581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:13:47.406000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Oct 31 01:13:47.408000 env[1486]: time="2025-10-31T01:13:47.407961266Z" level=info msg="Loading containers: done." 
Oct 31 01:13:47.423100 env[1486]: time="2025-10-31T01:13:47.423036001Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 31 01:13:47.423308 env[1486]: time="2025-10-31T01:13:47.423283000Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Oct 31 01:13:47.423426 env[1486]: time="2025-10-31T01:13:47.423400524Z" level=info msg="Daemon has completed initialization" Oct 31 01:13:47.441066 systemd[1]: Started docker.service. Oct 31 01:13:47.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:13:47.448244 env[1486]: time="2025-10-31T01:13:47.448160158Z" level=info msg="API listen on /run/docker.sock" Oct 31 01:13:48.146243 env[1316]: time="2025-10-31T01:13:48.146197137Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Oct 31 01:13:48.670996 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2804908786.mount: Deactivated successfully. 
Oct 31 01:13:50.150256 env[1316]: time="2025-10-31T01:13:50.150198118Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:13:50.151967 env[1316]: time="2025-10-31T01:13:50.151924261Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:13:50.154043 env[1316]: time="2025-10-31T01:13:50.153999090Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:13:50.155679 env[1316]: time="2025-10-31T01:13:50.155649275Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:13:50.156277 env[1316]: time="2025-10-31T01:13:50.156233236Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Oct 31 01:13:50.156830 env[1316]: time="2025-10-31T01:13:50.156798463Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Oct 31 01:13:51.658064 env[1316]: time="2025-10-31T01:13:51.657985211Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:13:51.664857 env[1316]: time="2025-10-31T01:13:51.664815205Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Oct 31 01:13:51.667384 env[1316]: time="2025-10-31T01:13:51.667332626Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:13:51.669602 env[1316]: time="2025-10-31T01:13:51.669562546Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:13:51.670309 env[1316]: time="2025-10-31T01:13:51.670276251Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Oct 31 01:13:51.670976 env[1316]: time="2025-10-31T01:13:51.670934144Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Oct 31 01:13:53.415195 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 31 01:13:53.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:13:53.415412 systemd[1]: Stopped kubelet.service. Oct 31 01:13:53.416865 kernel: kauditd_printk_skb: 84 callbacks suppressed Oct 31 01:13:53.416918 kernel: audit: type=1130 audit(1761873233.413:192): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:13:53.416968 systemd[1]: Starting kubelet.service... Oct 31 01:13:53.413000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Oct 31 01:13:53.427928 kernel: audit: type=1131 audit(1761873233.413:193): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:13:53.513000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:13:53.515309 systemd[1]: Started kubelet.service. Oct 31 01:13:53.521649 kernel: audit: type=1130 audit(1761873233.513:194): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:13:53.713917 kubelet[1627]: E1031 01:13:53.713735 1627 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 31 01:13:53.716975 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 31 01:13:53.717153 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 31 01:13:53.716000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Oct 31 01:13:53.723646 kernel: audit: type=1131 audit(1761873233.716:195): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Oct 31 01:13:53.854635 env[1316]: time="2025-10-31T01:13:53.854544459Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:13:53.856901 env[1316]: time="2025-10-31T01:13:53.856851710Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:13:53.859237 env[1316]: time="2025-10-31T01:13:53.859199565Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:13:53.860753 env[1316]: time="2025-10-31T01:13:53.860697461Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:13:53.861380 env[1316]: time="2025-10-31T01:13:53.861341523Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Oct 31 01:13:53.861891 env[1316]: time="2025-10-31T01:13:53.861823614Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Oct 31 01:13:55.056949 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3934760288.mount: Deactivated successfully. 
Oct 31 01:13:56.631717 env[1316]: time="2025-10-31T01:13:56.631636045Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:13:56.634045 env[1316]: time="2025-10-31T01:13:56.633982832Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:13:56.635554 env[1316]: time="2025-10-31T01:13:56.635502736Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:13:56.637264 env[1316]: time="2025-10-31T01:13:56.637226714Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:13:56.637787 env[1316]: time="2025-10-31T01:13:56.637751563Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Oct 31 01:13:56.638161 env[1316]: time="2025-10-31T01:13:56.638141127Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Oct 31 01:13:57.180247 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3099149545.mount: Deactivated successfully. 
Oct 31 01:13:58.711124 env[1316]: time="2025-10-31T01:13:58.711060240Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:13:58.712935 env[1316]: time="2025-10-31T01:13:58.712902111Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:13:58.714881 env[1316]: time="2025-10-31T01:13:58.714858080Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:13:58.716654 env[1316]: time="2025-10-31T01:13:58.716602044Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:13:58.717311 env[1316]: time="2025-10-31T01:13:58.717276285Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Oct 31 01:13:58.717828 env[1316]: time="2025-10-31T01:13:58.717797635Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Oct 31 01:13:59.195452 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3657336486.mount: Deactivated successfully. 
Oct 31 01:13:59.200874 env[1316]: time="2025-10-31T01:13:59.200822344Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:13:59.202549 env[1316]: time="2025-10-31T01:13:59.202518302Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:13:59.204145 env[1316]: time="2025-10-31T01:13:59.204102827Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:13:59.205355 env[1316]: time="2025-10-31T01:13:59.205323009Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:13:59.205860 env[1316]: time="2025-10-31T01:13:59.205824480Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Oct 31 01:13:59.206324 env[1316]: time="2025-10-31T01:13:59.206299443Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Oct 31 01:13:59.874373 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3469991878.mount: Deactivated successfully. 
Oct 31 01:14:03.338488 env[1316]: time="2025-10-31T01:14:03.338392842Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:14:03.340426 env[1316]: time="2025-10-31T01:14:03.340362086Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:14:03.342188 env[1316]: time="2025-10-31T01:14:03.342134463Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:14:03.344021 env[1316]: time="2025-10-31T01:14:03.343971919Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:14:03.345007 env[1316]: time="2025-10-31T01:14:03.344921557Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Oct 31 01:14:03.824167 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 31 01:14:03.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:14:03.824414 systemd[1]: Stopped kubelet.service. Oct 31 01:14:03.826176 systemd[1]: Starting kubelet.service... Oct 31 01:14:03.823000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:14:03.834842 kernel: audit: type=1130 audit(1761873243.823:196): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:14:03.834921 kernel: audit: type=1131 audit(1761873243.823:197): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:14:03.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:14:03.921760 systemd[1]: Started kubelet.service. Oct 31 01:14:03.927631 kernel: audit: type=1130 audit(1761873243.921:198): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:14:03.965925 kubelet[1664]: E1031 01:14:03.965867 1664 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 31 01:14:03.967773 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 31 01:14:03.967917 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 31 01:14:03.967000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Oct 31 01:14:03.973661 kernel: audit: type=1131 audit(1761873243.967:199): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Oct 31 01:14:05.428111 systemd[1]: Stopped kubelet.service. Oct 31 01:14:05.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:14:05.430321 systemd[1]: Starting kubelet.service... Oct 31 01:14:05.426000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:14:05.439843 kernel: audit: type=1130 audit(1761873245.426:200): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:14:05.439911 kernel: audit: type=1131 audit(1761873245.426:201): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:14:05.459466 systemd[1]: Reloading. 
Oct 31 01:14:05.519406 /usr/lib/systemd/system-generators/torcx-generator[1701]: time="2025-10-31T01:14:05Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Oct 31 01:14:05.519438 /usr/lib/systemd/system-generators/torcx-generator[1701]: time="2025-10-31T01:14:05Z" level=info msg="torcx already run" Oct 31 01:14:06.316327 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 31 01:14:06.316347 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 31 01:14:06.335663 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 31 01:14:06.403476 systemd[1]: Started kubelet.service. Oct 31 01:14:06.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:14:06.407643 systemd[1]: Stopping kubelet.service... Oct 31 01:14:06.410621 kernel: audit: type=1130 audit(1761873246.403:202): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:14:06.410644 systemd[1]: kubelet.service: Deactivated successfully. Oct 31 01:14:06.410874 systemd[1]: Stopped kubelet.service. 
Oct 31 01:14:06.410000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:14:06.412382 systemd[1]: Starting kubelet.service... Oct 31 01:14:06.417636 kernel: audit: type=1131 audit(1761873246.410:203): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:14:06.498605 systemd[1]: Started kubelet.service. Oct 31 01:14:06.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:14:06.508635 kernel: audit: type=1130 audit(1761873246.499:204): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:14:06.543626 kubelet[1765]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 31 01:14:06.543626 kubelet[1765]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 31 01:14:06.543626 kubelet[1765]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 31 01:14:06.544173 kubelet[1765]: I1031 01:14:06.543787 1765 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 31 01:14:07.056914 kubelet[1765]: I1031 01:14:07.056867 1765 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Oct 31 01:14:07.056914 kubelet[1765]: I1031 01:14:07.056897 1765 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 31 01:14:07.057183 kubelet[1765]: I1031 01:14:07.057160 1765 server.go:954] "Client rotation is on, will bootstrap in background" Oct 31 01:14:07.082157 kubelet[1765]: E1031 01:14:07.082120 1765 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.95:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" Oct 31 01:14:07.083086 kubelet[1765]: I1031 01:14:07.083047 1765 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 31 01:14:07.090637 kubelet[1765]: E1031 01:14:07.090588 1765 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Oct 31 01:14:07.090637 kubelet[1765]: I1031 01:14:07.090633 1765 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Oct 31 01:14:07.094897 kubelet[1765]: I1031 01:14:07.094873 1765 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 31 01:14:07.096138 kubelet[1765]: I1031 01:14:07.096100 1765 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 31 01:14:07.096345 kubelet[1765]: I1031 01:14:07.096137 1765 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Oct 31 01:14:07.096456 kubelet[1765]: I1031 01:14:07.096356 1765 topology_manager.go:138] "Creating topology manager with none policy" 
Oct 31 01:14:07.096456 kubelet[1765]: I1031 01:14:07.096374 1765 container_manager_linux.go:304] "Creating device plugin manager" Oct 31 01:14:07.096537 kubelet[1765]: I1031 01:14:07.096515 1765 state_mem.go:36] "Initialized new in-memory state store" Oct 31 01:14:07.098977 kubelet[1765]: I1031 01:14:07.098956 1765 kubelet.go:446] "Attempting to sync node with API server" Oct 31 01:14:07.099014 kubelet[1765]: I1031 01:14:07.098999 1765 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 31 01:14:07.099041 kubelet[1765]: I1031 01:14:07.099031 1765 kubelet.go:352] "Adding apiserver pod source" Oct 31 01:14:07.099093 kubelet[1765]: I1031 01:14:07.099073 1765 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 31 01:14:07.111562 kubelet[1765]: I1031 01:14:07.111538 1765 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Oct 31 01:14:07.111952 kubelet[1765]: I1031 01:14:07.111935 1765 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 31 01:14:07.117707 kubelet[1765]: W1031 01:14:07.117655 1765 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.95:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Oct 31 01:14:07.117824 kubelet[1765]: E1031 01:14:07.117713 1765 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.95:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" Oct 31 01:14:07.117924 kubelet[1765]: W1031 01:14:07.117906 1765 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. 
Recreating. Oct 31 01:14:07.117997 kubelet[1765]: W1031 01:14:07.117962 1765 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.95:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Oct 31 01:14:07.118025 kubelet[1765]: E1031 01:14:07.118005 1765 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.95:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" Oct 31 01:14:07.125540 kubelet[1765]: I1031 01:14:07.125517 1765 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 31 01:14:07.126521 kubelet[1765]: I1031 01:14:07.126007 1765 server.go:1287] "Started kubelet" Oct 31 01:14:07.126521 kubelet[1765]: I1031 01:14:07.126093 1765 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Oct 31 01:14:07.127192 kubelet[1765]: I1031 01:14:07.127165 1765 server.go:479] "Adding debug handlers to kubelet server" Oct 31 01:14:07.134136 kubelet[1765]: I1031 01:14:07.134071 1765 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 31 01:14:07.134386 kubelet[1765]: I1031 01:14:07.134357 1765 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 31 01:14:07.135000 audit[1765]: AVC avc: denied { mac_admin } for pid=1765 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:14:07.135916 kubelet[1765]: I1031 01:14:07.135766 1765 kubelet.go:1507] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" 
err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins_registry: invalid argument" Oct 31 01:14:07.135916 kubelet[1765]: I1031 01:14:07.135798 1765 kubelet.go:1511] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins: invalid argument" Oct 31 01:14:07.135916 kubelet[1765]: I1031 01:14:07.135858 1765 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 31 01:14:07.136768 kubelet[1765]: I1031 01:14:07.136392 1765 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 31 01:14:07.138427 kubelet[1765]: E1031 01:14:07.136120 1765 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.95:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.95:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18736e622a2e8b6f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-31 01:14:07.125539695 +0000 UTC m=+0.622976564,LastTimestamp:2025-10-31 01:14:07.125539695 +0000 UTC m=+0.622976564,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 31 01:14:07.138427 kubelet[1765]: I1031 01:14:07.138117 1765 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 31 01:14:07.138427 kubelet[1765]: E1031 01:14:07.138318 1765 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 01:14:07.138666 kubelet[1765]: I1031 
01:14:07.138648 1765 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 31 01:14:07.138719 kubelet[1765]: I1031 01:14:07.138713 1765 reconciler.go:26] "Reconciler: start to sync state" Oct 31 01:14:07.139044 kubelet[1765]: E1031 01:14:07.139011 1765 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.95:6443: connect: connection refused" interval="200ms" Oct 31 01:14:07.139366 kubelet[1765]: W1031 01:14:07.139024 1765 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.95:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Oct 31 01:14:07.139478 kubelet[1765]: E1031 01:14:07.139454 1765 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.95:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" Oct 31 01:14:07.141274 kubelet[1765]: I1031 01:14:07.141248 1765 factory.go:221] Registration of the systemd container factory successfully Oct 31 01:14:07.141359 kubelet[1765]: I1031 01:14:07.141328 1765 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 31 01:14:07.135000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 31 01:14:07.135000 audit[1765]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000275bc0 a1=c00013b9b0 a2=c000275b90 a3=25 items=0 ppid=1 pid=1765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" 
exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:07.135000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 31 01:14:07.135000 audit[1765]: AVC avc: denied { mac_admin } for pid=1765 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:14:07.135000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 31 01:14:07.135000 audit[1765]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00048be40 a1=c00013b9c8 a2=c000275c50 a3=25 items=0 ppid=1 pid=1765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:07.135000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 31 01:14:07.139000 audit[1778]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1778 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:14:07.139000 audit[1778]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd0ca4e3c0 a2=0 a3=7ffd0ca4e3ac items=0 ppid=1765 pid=1778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:07.139000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 31 01:14:07.140000 audit[1780]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1780 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:14:07.140000 audit[1780]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff984ff0f0 a2=0 a3=7fff984ff0dc items=0 ppid=1765 pid=1780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:07.140000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 31 01:14:07.142200 kubelet[1765]: I1031 01:14:07.142118 1765 factory.go:221] Registration of the containerd container factory successfully Oct 31 01:14:07.142200 kubelet[1765]: E1031 01:14:07.142157 1765 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 31 01:14:07.142635 kernel: audit: type=1400 audit(1761873247.135:205): avc: denied { mac_admin } for pid=1765 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:14:07.144000 audit[1782]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1782 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:14:07.144000 audit[1782]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffea0cab2a0 a2=0 a3=7ffea0cab28c items=0 ppid=1765 pid=1782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:07.144000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 31 01:14:07.146000 audit[1784]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1784 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:14:07.146000 audit[1784]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffcb0320c90 a2=0 a3=7ffcb0320c7c items=0 ppid=1765 pid=1784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:07.146000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 31 01:14:07.151000 audit[1787]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1787 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:14:07.151000 audit[1787]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 
a1=7ffef4c74430 a2=0 a3=7ffef4c7441c items=0 ppid=1765 pid=1787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:07.151000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Oct 31 01:14:07.152391 kubelet[1765]: I1031 01:14:07.152359 1765 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 31 01:14:07.152000 audit[1788]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=1788 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:14:07.152000 audit[1788]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff24ae00a0 a2=0 a3=7fff24ae008c items=0 ppid=1765 pid=1788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:07.152000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 31 01:14:07.153296 kubelet[1765]: I1031 01:14:07.153274 1765 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 31 01:14:07.153355 kubelet[1765]: I1031 01:14:07.153301 1765 status_manager.go:227] "Starting to sync pod status with apiserver" Oct 31 01:14:07.153355 kubelet[1765]: I1031 01:14:07.153327 1765 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Oct 31 01:14:07.153355 kubelet[1765]: I1031 01:14:07.153336 1765 kubelet.go:2382] "Starting kubelet main sync loop" Oct 31 01:14:07.153432 kubelet[1765]: E1031 01:14:07.153396 1765 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 31 01:14:07.153000 audit[1791]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=1791 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:14:07.153000 audit[1791]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffe9bc97f0 a2=0 a3=7fffe9bc97dc items=0 ppid=1765 pid=1791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:07.153000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 31 01:14:07.154731 kubelet[1765]: W1031 01:14:07.154705 1765 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.95:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Oct 31 01:14:07.154813 kubelet[1765]: E1031 01:14:07.154753 1765 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.95:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" Oct 31 01:14:07.154000 audit[1792]: NETFILTER_CFG table=mangle:33 family=10 entries=1 op=nft_register_chain pid=1792 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:14:07.154000 audit[1792]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 
a1=7ffd60a52d70 a2=0 a3=7ffd60a52d5c items=0 ppid=1765 pid=1792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:07.154000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 31 01:14:07.154000 audit[1793]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=1793 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:14:07.154000 audit[1793]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe1a92b310 a2=0 a3=7ffe1a92b2fc items=0 ppid=1765 pid=1793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:07.154000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 31 01:14:07.155000 audit[1795]: NETFILTER_CFG table=nat:35 family=10 entries=2 op=nft_register_chain pid=1795 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:14:07.155000 audit[1795]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffe0caace80 a2=0 a3=7ffe0caace6c items=0 ppid=1765 pid=1795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:07.155000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 31 01:14:07.155000 audit[1796]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_chain pid=1796 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:14:07.155000 audit[1796]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff1dacf6d0 a2=0 a3=7fff1dacf6bc items=0 ppid=1765 pid=1796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:07.155000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 31 01:14:07.156000 audit[1798]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=1798 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:14:07.156000 audit[1798]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffef2200010 a2=0 a3=7ffef21ffffc items=0 ppid=1765 pid=1798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:07.156000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 31 01:14:07.159682 kubelet[1765]: I1031 01:14:07.159664 1765 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 31 01:14:07.159682 kubelet[1765]: I1031 01:14:07.159678 1765 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 31 01:14:07.159772 kubelet[1765]: I1031 01:14:07.159697 1765 state_mem.go:36] "Initialized new in-memory state store" Oct 31 01:14:07.239221 kubelet[1765]: E1031 01:14:07.239191 1765 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 01:14:07.254429 kubelet[1765]: E1031 01:14:07.254383 1765 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 31 01:14:07.339652 kubelet[1765]: E1031 01:14:07.339569 1765 kubelet_node_status.go:466] "Error getting the current node 
from lister" err="node \"localhost\" not found" Oct 31 01:14:07.340665 kubelet[1765]: E1031 01:14:07.340027 1765 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.95:6443: connect: connection refused" interval="400ms" Oct 31 01:14:07.440359 kubelet[1765]: E1031 01:14:07.440294 1765 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 01:14:07.455534 kubelet[1765]: E1031 01:14:07.455496 1765 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 31 01:14:07.459585 kubelet[1765]: I1031 01:14:07.459565 1765 policy_none.go:49] "None policy: Start" Oct 31 01:14:07.459635 kubelet[1765]: I1031 01:14:07.459592 1765 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 31 01:14:07.459635 kubelet[1765]: I1031 01:14:07.459632 1765 state_mem.go:35] "Initializing new in-memory state store" Oct 31 01:14:07.466117 kubelet[1765]: I1031 01:14:07.466089 1765 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 31 01:14:07.465000 audit[1765]: AVC avc: denied { mac_admin } for pid=1765 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:14:07.465000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 31 01:14:07.465000 audit[1765]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00109f560 a1=c001094c30 a2=c00109f530 a3=25 items=0 ppid=1 pid=1765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:07.465000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 31 01:14:07.466310 kubelet[1765]: I1031 01:14:07.466162 1765 server.go:94] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/device-plugins/: invalid argument" Oct 31 01:14:07.466310 kubelet[1765]: I1031 01:14:07.466260 1765 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 31 01:14:07.466310 kubelet[1765]: I1031 01:14:07.466274 1765 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 31 01:14:07.466964 kubelet[1765]: I1031 01:14:07.466900 1765 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 31 01:14:07.467510 kubelet[1765]: E1031 01:14:07.467492 1765 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 31 01:14:07.467558 kubelet[1765]: E1031 01:14:07.467542 1765 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 31 01:14:07.567751 kubelet[1765]: I1031 01:14:07.567702 1765 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 01:14:07.568159 kubelet[1765]: E1031 01:14:07.568052 1765 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.95:6443/api/v1/nodes\": dial tcp 10.0.0.95:6443: connect: connection refused" node="localhost" Oct 31 01:14:07.740935 kubelet[1765]: E1031 01:14:07.740879 1765 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.95:6443: connect: connection refused" interval="800ms" Oct 31 01:14:07.770045 kubelet[1765]: I1031 01:14:07.770008 1765 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 01:14:07.770350 kubelet[1765]: E1031 01:14:07.770323 1765 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.95:6443/api/v1/nodes\": dial tcp 10.0.0.95:6443: connect: connection refused" node="localhost" Oct 31 01:14:07.860449 kubelet[1765]: E1031 01:14:07.860409 1765 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 01:14:07.862035 kubelet[1765]: E1031 01:14:07.862000 1765 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 01:14:07.862658 kubelet[1765]: E1031 01:14:07.862640 1765 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" 
node="localhost" Oct 31 01:14:07.943456 kubelet[1765]: I1031 01:14:07.943412 1765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 01:14:07.943456 kubelet[1765]: I1031 01:14:07.943443 1765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 01:14:07.943640 kubelet[1765]: I1031 01:14:07.943463 1765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 01:14:07.943640 kubelet[1765]: I1031 01:14:07.943481 1765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a51a1e02af54ef4524ca2d4fc10bc356-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a51a1e02af54ef4524ca2d4fc10bc356\") " pod="kube-system/kube-apiserver-localhost" Oct 31 01:14:07.943640 kubelet[1765]: I1031 01:14:07.943495 1765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a51a1e02af54ef4524ca2d4fc10bc356-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a51a1e02af54ef4524ca2d4fc10bc356\") " 
pod="kube-system/kube-apiserver-localhost" Oct 31 01:14:07.943640 kubelet[1765]: I1031 01:14:07.943510 1765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a51a1e02af54ef4524ca2d4fc10bc356-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a51a1e02af54ef4524ca2d4fc10bc356\") " pod="kube-system/kube-apiserver-localhost" Oct 31 01:14:07.943640 kubelet[1765]: I1031 01:14:07.943565 1765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 01:14:07.943805 kubelet[1765]: I1031 01:14:07.943595 1765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 01:14:07.943805 kubelet[1765]: I1031 01:14:07.943632 1765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Oct 31 01:14:07.993043 kubelet[1765]: W1031 01:14:07.992949 1765 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.95:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Oct 31 01:14:07.993043 
kubelet[1765]: E1031 01:14:07.992998 1765 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.95:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" Oct 31 01:14:08.161250 kubelet[1765]: E1031 01:14:08.161217 1765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:14:08.161759 env[1316]: time="2025-10-31T01:14:08.161724090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a51a1e02af54ef4524ca2d4fc10bc356,Namespace:kube-system,Attempt:0,}" Oct 31 01:14:08.162904 kubelet[1765]: E1031 01:14:08.162868 1765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:14:08.162964 kubelet[1765]: E1031 01:14:08.162872 1765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:14:08.163214 env[1316]: time="2025-10-31T01:14:08.163165248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,}" Oct 31 01:14:08.163373 env[1316]: time="2025-10-31T01:14:08.163254302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,}" Oct 31 01:14:08.171382 kubelet[1765]: I1031 01:14:08.171357 1765 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 01:14:08.171692 kubelet[1765]: E1031 01:14:08.171667 1765 
kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.95:6443/api/v1/nodes\": dial tcp 10.0.0.95:6443: connect: connection refused" node="localhost" Oct 31 01:14:08.239163 kubelet[1765]: W1031 01:14:08.239092 1765 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.95:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Oct 31 01:14:08.239210 kubelet[1765]: E1031 01:14:08.239155 1765 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.95:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" Oct 31 01:14:08.247802 kubelet[1765]: W1031 01:14:08.247720 1765 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.95:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Oct 31 01:14:08.247802 kubelet[1765]: E1031 01:14:08.247756 1765 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.95:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" Oct 31 01:14:08.399978 kubelet[1765]: W1031 01:14:08.399907 1765 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.95:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.95:6443: connect: connection refused Oct 31 01:14:08.400103 kubelet[1765]: E1031 01:14:08.399980 1765 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.95:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.95:6443: connect: connection refused" logger="UnhandledError" Oct 31 01:14:08.541544 kubelet[1765]: E1031 01:14:08.541384 1765 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.95:6443: connect: connection refused" interval="1.6s" Oct 31 01:14:08.844025 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1642437627.mount: Deactivated successfully. Oct 31 01:14:08.848672 env[1316]: time="2025-10-31T01:14:08.848631528Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:14:08.851195 env[1316]: time="2025-10-31T01:14:08.851163819Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:14:08.852809 env[1316]: time="2025-10-31T01:14:08.852781964Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:14:08.853768 env[1316]: time="2025-10-31T01:14:08.853734700Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:14:08.855505 env[1316]: time="2025-10-31T01:14:08.855439475Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 
01:14:08.856642 env[1316]: time="2025-10-31T01:14:08.856600642Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:14:08.857783 env[1316]: time="2025-10-31T01:14:08.857758794Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:14:08.859073 env[1316]: time="2025-10-31T01:14:08.859047857Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:14:08.860998 env[1316]: time="2025-10-31T01:14:08.860974782Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:14:08.861599 env[1316]: time="2025-10-31T01:14:08.861574265Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:14:08.862756 env[1316]: time="2025-10-31T01:14:08.862729681Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:14:08.864727 env[1316]: time="2025-10-31T01:14:08.864668130Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:14:08.877964 env[1316]: time="2025-10-31T01:14:08.877907501Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 01:14:08.878130 env[1316]: time="2025-10-31T01:14:08.877940210Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 01:14:08.878130 env[1316]: time="2025-10-31T01:14:08.877949118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:14:08.878130 env[1316]: time="2025-10-31T01:14:08.878090601Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/10c16ee7db5b9c6d5df2fdc419892c262a23446831a67f7538c2619ca086f0f6 pid=1808 runtime=io.containerd.runc.v2 Oct 31 01:14:08.894061 env[1316]: time="2025-10-31T01:14:08.893991070Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 01:14:08.894061 env[1316]: time="2025-10-31T01:14:08.894072900Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 01:14:08.894283 env[1316]: time="2025-10-31T01:14:08.894180613Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:14:08.894363 env[1316]: time="2025-10-31T01:14:08.894331787Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a0b2cdb2096d590c6f0a0e23355eabe969d8396ade1952bf61eca73275e5dde4 pid=1839 runtime=io.containerd.runc.v2 Oct 31 01:14:08.894917 env[1316]: time="2025-10-31T01:14:08.894841503Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 01:14:08.894917 env[1316]: time="2025-10-31T01:14:08.894881236Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 01:14:08.894917 env[1316]: time="2025-10-31T01:14:08.894891387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:14:08.900734 env[1316]: time="2025-10-31T01:14:08.895207883Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4ff44f413b57df82bf66f9c385e92aea51317535405d18dfbc7cb951ce0a898c pid=1845 runtime=io.containerd.runc.v2 Oct 31 01:14:08.938997 env[1316]: time="2025-10-31T01:14:08.938947917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,} returns sandbox id \"10c16ee7db5b9c6d5df2fdc419892c262a23446831a67f7538c2619ca086f0f6\"" Oct 31 01:14:08.939937 kubelet[1765]: E1031 01:14:08.939912 1765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:14:08.944937 env[1316]: time="2025-10-31T01:14:08.944902473Z" level=info msg="CreateContainer within sandbox \"10c16ee7db5b9c6d5df2fdc419892c262a23446831a67f7538c2619ca086f0f6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 31 01:14:08.947637 env[1316]: time="2025-10-31T01:14:08.946657142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ff44f413b57df82bf66f9c385e92aea51317535405d18dfbc7cb951ce0a898c\"" Oct 31 01:14:08.948297 env[1316]: time="2025-10-31T01:14:08.948273995Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a51a1e02af54ef4524ca2d4fc10bc356,Namespace:kube-system,Attempt:0,} returns sandbox id \"a0b2cdb2096d590c6f0a0e23355eabe969d8396ade1952bf61eca73275e5dde4\"" Oct 31 01:14:08.948897 kubelet[1765]: E1031 01:14:08.948859 1765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:14:08.949069 kubelet[1765]: E1031 01:14:08.949057 1765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:14:08.950243 env[1316]: time="2025-10-31T01:14:08.950218577Z" level=info msg="CreateContainer within sandbox \"4ff44f413b57df82bf66f9c385e92aea51317535405d18dfbc7cb951ce0a898c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 31 01:14:08.950442 env[1316]: time="2025-10-31T01:14:08.950418430Z" level=info msg="CreateContainer within sandbox \"a0b2cdb2096d590c6f0a0e23355eabe969d8396ade1952bf61eca73275e5dde4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 31 01:14:08.959631 env[1316]: time="2025-10-31T01:14:08.959550115Z" level=info msg="CreateContainer within sandbox \"10c16ee7db5b9c6d5df2fdc419892c262a23446831a67f7538c2619ca086f0f6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"17b0d2ef9a05e0b4cee68f3ca2339d6d697a76ca0623fa2426024035d8a39899\"" Oct 31 01:14:08.960100 env[1316]: time="2025-10-31T01:14:08.960078100Z" level=info msg="StartContainer for \"17b0d2ef9a05e0b4cee68f3ca2339d6d697a76ca0623fa2426024035d8a39899\"" Oct 31 01:14:08.972905 kubelet[1765]: I1031 01:14:08.972546 1765 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 01:14:08.972905 kubelet[1765]: E1031 01:14:08.972874 1765 kubelet_node_status.go:107] "Unable to register node with API server" err="Post 
\"https://10.0.0.95:6443/api/v1/nodes\": dial tcp 10.0.0.95:6443: connect: connection refused" node="localhost" Oct 31 01:14:08.973160 env[1316]: time="2025-10-31T01:14:08.973127347Z" level=info msg="CreateContainer within sandbox \"4ff44f413b57df82bf66f9c385e92aea51317535405d18dfbc7cb951ce0a898c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5abee7788d18f00695d89e8ccdcf9b8eb4b55e9bc559de24fc2cb092cd8e2c09\"" Oct 31 01:14:08.973498 env[1316]: time="2025-10-31T01:14:08.973476501Z" level=info msg="StartContainer for \"5abee7788d18f00695d89e8ccdcf9b8eb4b55e9bc559de24fc2cb092cd8e2c09\"" Oct 31 01:14:08.976343 env[1316]: time="2025-10-31T01:14:08.976304905Z" level=info msg="CreateContainer within sandbox \"a0b2cdb2096d590c6f0a0e23355eabe969d8396ade1952bf61eca73275e5dde4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"12d25b239eb4af1a96b485b71bccb745f803658dc2ad40bc6cd56731614095b0\"" Oct 31 01:14:08.976690 env[1316]: time="2025-10-31T01:14:08.976604367Z" level=info msg="StartContainer for \"12d25b239eb4af1a96b485b71bccb745f803658dc2ad40bc6cd56731614095b0\"" Oct 31 01:14:09.023070 env[1316]: time="2025-10-31T01:14:09.022318987Z" level=info msg="StartContainer for \"17b0d2ef9a05e0b4cee68f3ca2339d6d697a76ca0623fa2426024035d8a39899\" returns successfully" Oct 31 01:14:09.030427 env[1316]: time="2025-10-31T01:14:09.030378779Z" level=info msg="StartContainer for \"5abee7788d18f00695d89e8ccdcf9b8eb4b55e9bc559de24fc2cb092cd8e2c09\" returns successfully" Oct 31 01:14:09.043671 env[1316]: time="2025-10-31T01:14:09.040131820Z" level=info msg="StartContainer for \"12d25b239eb4af1a96b485b71bccb745f803658dc2ad40bc6cd56731614095b0\" returns successfully" Oct 31 01:14:09.159688 kubelet[1765]: E1031 01:14:09.159560 1765 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 01:14:09.159848 kubelet[1765]: E1031 
01:14:09.159758 1765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:14:09.161318 kubelet[1765]: E1031 01:14:09.161299 1765 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 01:14:09.161430 kubelet[1765]: E1031 01:14:09.161407 1765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:14:09.163184 kubelet[1765]: E1031 01:14:09.163163 1765 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 01:14:09.163295 kubelet[1765]: E1031 01:14:09.163267 1765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:14:10.145291 kubelet[1765]: E1031 01:14:10.145240 1765 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 31 01:14:10.164956 kubelet[1765]: E1031 01:14:10.164919 1765 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 01:14:10.165123 kubelet[1765]: E1031 01:14:10.165022 1765 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 01:14:10.165123 kubelet[1765]: E1031 01:14:10.165037 1765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:14:10.165173 
kubelet[1765]: E1031 01:14:10.165149 1765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:14:10.165280 kubelet[1765]: E1031 01:14:10.165262 1765 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 01:14:10.165360 kubelet[1765]: E1031 01:14:10.165345 1765 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:14:10.323235 kubelet[1765]: E1031 01:14:10.323194 1765 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Oct 31 01:14:10.574662 kubelet[1765]: I1031 01:14:10.574633 1765 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 01:14:10.581265 kubelet[1765]: I1031 01:14:10.581241 1765 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 31 01:14:10.581331 kubelet[1765]: E1031 01:14:10.581269 1765 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Oct 31 01:14:10.639117 kubelet[1765]: I1031 01:14:10.639072 1765 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 31 01:14:10.756599 kubelet[1765]: E1031 01:14:10.756550 1765 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Oct 31 01:14:10.756599 kubelet[1765]: I1031 01:14:10.756584 1765 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 31 01:14:10.758024 
kubelet[1765]: E1031 01:14:10.757997 1765 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 31 01:14:10.758024 kubelet[1765]: I1031 01:14:10.758016 1765 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 31 01:14:10.759595 kubelet[1765]: E1031 01:14:10.759572 1765 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Oct 31 01:14:11.113086 kubelet[1765]: I1031 01:14:11.113047 1765 apiserver.go:52] "Watching apiserver" Oct 31 01:14:11.139627 kubelet[1765]: I1031 01:14:11.139570 1765 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 31 01:14:12.022572 systemd[1]: Reloading. Oct 31 01:14:12.083389 /usr/lib/systemd/system-generators/torcx-generator[2065]: time="2025-10-31T01:14:12Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Oct 31 01:14:12.083418 /usr/lib/systemd/system-generators/torcx-generator[2065]: time="2025-10-31T01:14:12Z" level=info msg="torcx already run" Oct 31 01:14:12.156588 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 31 01:14:12.156625 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 31 01:14:12.175973 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 31 01:14:12.250225 systemd[1]: Stopping kubelet.service... Oct 31 01:14:12.274969 systemd[1]: kubelet.service: Deactivated successfully. Oct 31 01:14:12.275298 systemd[1]: Stopped kubelet.service. Oct 31 01:14:12.274000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:14:12.277305 kernel: kauditd_printk_skb: 47 callbacks suppressed Oct 31 01:14:12.277352 kernel: audit: type=1131 audit(1761873252.274:220): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:14:12.278026 systemd[1]: Starting kubelet.service... Oct 31 01:14:12.377987 systemd[1]: Started kubelet.service. Oct 31 01:14:12.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:14:12.385639 kernel: audit: type=1130 audit(1761873252.378:221): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:14:12.421666 kubelet[2122]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 31 01:14:12.421666 kubelet[2122]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 31 01:14:12.421666 kubelet[2122]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 31 01:14:12.422087 kubelet[2122]: I1031 01:14:12.421734 2122 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 31 01:14:12.427604 kubelet[2122]: I1031 01:14:12.427569 2122 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Oct 31 01:14:12.427604 kubelet[2122]: I1031 01:14:12.427592 2122 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 31 01:14:12.427814 kubelet[2122]: I1031 01:14:12.427792 2122 server.go:954] "Client rotation is on, will bootstrap in background" Oct 31 01:14:12.428871 kubelet[2122]: I1031 01:14:12.428850 2122 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Oct 31 01:14:12.431354 kubelet[2122]: I1031 01:14:12.431335 2122 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 31 01:14:12.434834 kubelet[2122]: E1031 01:14:12.434804 2122 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Oct 31 01:14:12.434834 kubelet[2122]: I1031 01:14:12.434833 2122 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Oct 31 01:14:12.438603 kubelet[2122]: I1031 01:14:12.438561 2122 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 31 01:14:12.439178 kubelet[2122]: I1031 01:14:12.439139 2122 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 31 01:14:12.439325 kubelet[2122]: I1031 01:14:12.439168 2122 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVe
rsion":1} Oct 31 01:14:12.439406 kubelet[2122]: I1031 01:14:12.439327 2122 topology_manager.go:138] "Creating topology manager with none policy" Oct 31 01:14:12.439406 kubelet[2122]: I1031 01:14:12.439336 2122 container_manager_linux.go:304] "Creating device plugin manager" Oct 31 01:14:12.439406 kubelet[2122]: I1031 01:14:12.439378 2122 state_mem.go:36] "Initialized new in-memory state store" Oct 31 01:14:12.439497 kubelet[2122]: I1031 01:14:12.439486 2122 kubelet.go:446] "Attempting to sync node with API server" Oct 31 01:14:12.439523 kubelet[2122]: I1031 01:14:12.439507 2122 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 31 01:14:12.439523 kubelet[2122]: I1031 01:14:12.439524 2122 kubelet.go:352] "Adding apiserver pod source" Oct 31 01:14:12.439566 kubelet[2122]: I1031 01:14:12.439533 2122 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 31 01:14:12.440278 kubelet[2122]: I1031 01:14:12.440251 2122 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Oct 31 01:14:12.440581 kubelet[2122]: I1031 01:14:12.440565 2122 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 31 01:14:12.440957 kubelet[2122]: I1031 01:14:12.440941 2122 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 31 01:14:12.441031 kubelet[2122]: I1031 01:14:12.440967 2122 server.go:1287] "Started kubelet" Oct 31 01:14:12.441000 audit[2122]: AVC avc: denied { mac_admin } for pid=2122 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:14:12.445042 kubelet[2122]: I1031 01:14:12.444925 2122 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 31 01:14:12.445233 kubelet[2122]: I1031 01:14:12.445172 2122 server.go:243] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 31 01:14:12.445233 kubelet[2122]: I1031 01:14:12.445227 2122 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Oct 31 01:14:12.448340 kubelet[2122]: I1031 01:14:12.448303 2122 server.go:479] "Adding debug handlers to kubelet server" Oct 31 01:14:12.451311 kernel: audit: type=1400 audit(1761873252.441:222): avc: denied { mac_admin } for pid=2122 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:14:12.451419 kernel: audit: type=1401 audit(1761873252.441:222): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 31 01:14:12.441000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 31 01:14:12.441000 audit[2122]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000b37f80 a1=c0009a7170 a2=c000b37f50 a3=25 items=0 ppid=1 pid=2122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:12.456896 kubelet[2122]: I1031 01:14:12.452883 2122 kubelet.go:1507] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins_registry: invalid argument" Oct 31 01:14:12.456896 kubelet[2122]: I1031 01:14:12.452994 2122 kubelet.go:1511] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins: invalid argument" Oct 31 01:14:12.456896 kubelet[2122]: I1031 01:14:12.453038 2122 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 31 01:14:12.456896 kubelet[2122]: I1031 
01:14:12.454815 2122 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 31 01:14:12.459266 kernel: audit: type=1300 audit(1761873252.441:222): arch=c000003e syscall=188 success=no exit=-22 a0=c000b37f80 a1=c0009a7170 a2=c000b37f50 a3=25 items=0 ppid=1 pid=2122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:12.459766 kernel: audit: type=1327 audit(1761873252.441:222): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 31 01:14:12.441000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 31 01:14:12.461105 kubelet[2122]: I1031 01:14:12.461085 2122 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 31 01:14:12.461760 kubelet[2122]: I1031 01:14:12.461744 2122 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 31 01:14:12.463791 kubelet[2122]: I1031 01:14:12.463261 2122 reconciler.go:26] "Reconciler: start to sync state" Oct 31 01:14:12.465042 kubelet[2122]: E1031 01:14:12.465019 2122 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 31 01:14:12.452000 audit[2122]: AVC avc: denied { mac_admin } for pid=2122 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:14:12.469760 kubelet[2122]: I1031 01:14:12.466923 2122 factory.go:221] Registration of the containerd container factory successfully Oct 31 01:14:12.469760 kubelet[2122]: I1031 01:14:12.466973 2122 factory.go:221] Registration of the systemd container factory successfully Oct 31 01:14:12.469760 kubelet[2122]: I1031 01:14:12.467080 2122 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 31 01:14:12.471736 kernel: audit: type=1400 audit(1761873252.452:223): avc: denied { mac_admin } for pid=2122 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:14:12.474828 kernel: audit: type=1401 audit(1761873252.452:223): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 31 01:14:12.452000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 31 01:14:12.452000 audit[2122]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000aaa040 a1=c000aa8018 a2=c000ac0060 a3=25 items=0 ppid=1 pid=2122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:12.482777 kernel: audit: type=1300 audit(1761873252.452:223): arch=c000003e syscall=188 success=no exit=-22 a0=c000aaa040 a1=c000aa8018 a2=c000ac0060 a3=25 items=0 ppid=1 pid=2122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:12.489736 kernel: audit: type=1327 audit(1761873252.452:223): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 31 01:14:12.452000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 31 01:14:12.489826 kubelet[2122]: I1031 01:14:12.484500 2122 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 31 01:14:12.489826 kubelet[2122]: I1031 01:14:12.485471 2122 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 31 01:14:12.489826 kubelet[2122]: I1031 01:14:12.485486 2122 status_manager.go:227] "Starting to sync pod status with apiserver" Oct 31 01:14:12.489826 kubelet[2122]: I1031 01:14:12.485505 2122 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Oct 31 01:14:12.489826 kubelet[2122]: I1031 01:14:12.485511 2122 kubelet.go:2382] "Starting kubelet main sync loop" Oct 31 01:14:12.489826 kubelet[2122]: E1031 01:14:12.485554 2122 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 31 01:14:12.509399 kubelet[2122]: I1031 01:14:12.509367 2122 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 31 01:14:12.509399 kubelet[2122]: I1031 01:14:12.509389 2122 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 31 01:14:12.509399 kubelet[2122]: I1031 01:14:12.509407 2122 state_mem.go:36] "Initialized new in-memory state store" Oct 31 01:14:12.509594 kubelet[2122]: I1031 01:14:12.509571 2122 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 31 01:14:12.509633 kubelet[2122]: I1031 01:14:12.509581 2122 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 31 01:14:12.509633 kubelet[2122]: I1031 01:14:12.509600 2122 policy_none.go:49] "None policy: Start" Oct 31 01:14:12.509706 kubelet[2122]: I1031 01:14:12.509690 2122 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 31 01:14:12.509706 kubelet[2122]: I1031 01:14:12.509706 2122 state_mem.go:35] "Initializing new in-memory state store" Oct 31 01:14:12.509909 kubelet[2122]: I1031 01:14:12.509893 2122 state_mem.go:75] "Updated machine memory state" Oct 31 01:14:12.511597 kubelet[2122]: I1031 01:14:12.511581 2122 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 31 01:14:12.511000 audit[2122]: AVC avc: denied { mac_admin } for pid=2122 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:14:12.511000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 31 01:14:12.511000 audit[2122]: SYSCALL arch=c000003e 
syscall=188 success=no exit=-22 a0=c00115a1b0 a1=c000d27680 a2=c00115a180 a3=25 items=0 ppid=1 pid=2122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:12.511000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 31 01:14:12.512037 kubelet[2122]: I1031 01:14:12.512013 2122 server.go:94] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/device-plugins/: invalid argument" Oct 31 01:14:12.512245 kubelet[2122]: I1031 01:14:12.512231 2122 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 31 01:14:12.512338 kubelet[2122]: I1031 01:14:12.512306 2122 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 31 01:14:12.512840 kubelet[2122]: I1031 01:14:12.512829 2122 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 31 01:14:12.513231 kubelet[2122]: E1031 01:14:12.513215 2122 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 31 01:14:12.587688 kubelet[2122]: I1031 01:14:12.586956 2122 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 31 01:14:12.587688 kubelet[2122]: I1031 01:14:12.586991 2122 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 31 01:14:12.587688 kubelet[2122]: I1031 01:14:12.587082 2122 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 31 01:14:12.618541 kubelet[2122]: I1031 01:14:12.618509 2122 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 01:14:12.623629 kubelet[2122]: I1031 01:14:12.623598 2122 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Oct 31 01:14:12.623702 kubelet[2122]: I1031 01:14:12.623688 2122 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 31 01:14:12.663742 kubelet[2122]: I1031 01:14:12.663697 2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a51a1e02af54ef4524ca2d4fc10bc356-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a51a1e02af54ef4524ca2d4fc10bc356\") " pod="kube-system/kube-apiserver-localhost" Oct 31 01:14:12.663742 kubelet[2122]: I1031 01:14:12.663737 2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 01:14:12.663993 kubelet[2122]: I1031 01:14:12.663756 2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 01:14:12.663993 kubelet[2122]: I1031 01:14:12.663775 2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 01:14:12.663993 kubelet[2122]: I1031 01:14:12.663793 2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Oct 31 01:14:12.663993 kubelet[2122]: I1031 01:14:12.663808 2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a51a1e02af54ef4524ca2d4fc10bc356-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a51a1e02af54ef4524ca2d4fc10bc356\") " pod="kube-system/kube-apiserver-localhost" Oct 31 01:14:12.663993 kubelet[2122]: I1031 01:14:12.663838 2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a51a1e02af54ef4524ca2d4fc10bc356-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a51a1e02af54ef4524ca2d4fc10bc356\") " pod="kube-system/kube-apiserver-localhost" Oct 31 01:14:12.664108 kubelet[2122]: I1031 01:14:12.663867 2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 01:14:12.664108 kubelet[2122]: I1031 01:14:12.663898 2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 01:14:12.892062 kubelet[2122]: E1031 01:14:12.891905 2122 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:14:12.895185 kubelet[2122]: E1031 01:14:12.895137 2122 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:14:12.895326 kubelet[2122]: E1031 01:14:12.895236 2122 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:14:13.440583 kubelet[2122]: I1031 01:14:13.440516 2122 apiserver.go:52] "Watching apiserver" Oct 31 01:14:13.462197 kubelet[2122]: I1031 01:14:13.462149 2122 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 31 01:14:13.494512 kubelet[2122]: E1031 01:14:13.494474 2122 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:14:13.496653 kubelet[2122]: I1031 01:14:13.495164 2122 kubelet.go:3194] "Creating a mirror pod for static pod" 
pod="kube-system/kube-scheduler-localhost" Oct 31 01:14:13.496653 kubelet[2122]: E1031 01:14:13.495595 2122 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:14:13.500595 kubelet[2122]: E1031 01:14:13.500518 2122 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 31 01:14:13.500858 kubelet[2122]: E1031 01:14:13.500687 2122 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:14:13.514826 kubelet[2122]: I1031 01:14:13.514736 2122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.514714413 podStartE2EDuration="1.514714413s" podCreationTimestamp="2025-10-31 01:14:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 01:14:13.514475328 +0000 UTC m=+1.131929091" watchObservedRunningTime="2025-10-31 01:14:13.514714413 +0000 UTC m=+1.132168166" Oct 31 01:14:13.521113 kubelet[2122]: I1031 01:14:13.521015 2122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.520983485 podStartE2EDuration="1.520983485s" podCreationTimestamp="2025-10-31 01:14:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 01:14:13.520768347 +0000 UTC m=+1.138222100" watchObservedRunningTime="2025-10-31 01:14:13.520983485 +0000 UTC m=+1.138437248" Oct 31 01:14:13.528112 kubelet[2122]: I1031 01:14:13.527980 2122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.527940828 podStartE2EDuration="1.527940828s" podCreationTimestamp="2025-10-31 01:14:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 01:14:13.527772853 +0000 UTC m=+1.145226616" watchObservedRunningTime="2025-10-31 01:14:13.527940828 +0000 UTC m=+1.145394591" Oct 31 01:14:14.495750 kubelet[2122]: E1031 01:14:14.495714 2122 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:14:14.496168 kubelet[2122]: E1031 01:14:14.496121 2122 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:14:15.496375 kubelet[2122]: E1031 01:14:15.496324 2122 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:14:18.352458 kubelet[2122]: I1031 01:14:18.352418 2122 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 31 01:14:18.352995 kubelet[2122]: I1031 01:14:18.352882 2122 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 31 01:14:18.353038 env[1316]: time="2025-10-31T01:14:18.352730693Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Oct 31 01:14:19.314631 kubelet[2122]: E1031 01:14:19.314546 2122 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:14:19.501879 kubelet[2122]: E1031 01:14:19.501850 2122 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:14:20.392206 kubelet[2122]: E1031 01:14:20.391449 2122 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:14:20.503444 kubelet[2122]: E1031 01:14:20.503379 2122 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:14:20.503956 kubelet[2122]: E1031 01:14:20.503924 2122 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:14:20.515676 kubelet[2122]: I1031 01:14:20.515592 2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/307938ef-9a2d-44b8-a406-ca299241c462-kube-proxy\") pod \"kube-proxy-nmzzs\" (UID: \"307938ef-9a2d-44b8-a406-ca299241c462\") " pod="kube-system/kube-proxy-nmzzs" Oct 31 01:14:20.515676 kubelet[2122]: I1031 01:14:20.515671 2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/307938ef-9a2d-44b8-a406-ca299241c462-lib-modules\") pod \"kube-proxy-nmzzs\" (UID: \"307938ef-9a2d-44b8-a406-ca299241c462\") " pod="kube-system/kube-proxy-nmzzs" Oct 31 01:14:20.515676 kubelet[2122]: I1031 01:14:20.515691 
2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/307938ef-9a2d-44b8-a406-ca299241c462-xtables-lock\") pod \"kube-proxy-nmzzs\" (UID: \"307938ef-9a2d-44b8-a406-ca299241c462\") " pod="kube-system/kube-proxy-nmzzs" Oct 31 01:14:20.515967 kubelet[2122]: I1031 01:14:20.515706 2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqgld\" (UniqueName: \"kubernetes.io/projected/307938ef-9a2d-44b8-a406-ca299241c462-kube-api-access-fqgld\") pod \"kube-proxy-nmzzs\" (UID: \"307938ef-9a2d-44b8-a406-ca299241c462\") " pod="kube-system/kube-proxy-nmzzs" Oct 31 01:14:20.616551 kubelet[2122]: I1031 01:14:20.616449 2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xkcl\" (UniqueName: \"kubernetes.io/projected/915ed83a-b888-406d-a30d-ce13e1698a2c-kube-api-access-9xkcl\") pod \"tigera-operator-7dcd859c48-zll48\" (UID: \"915ed83a-b888-406d-a30d-ce13e1698a2c\") " pod="tigera-operator/tigera-operator-7dcd859c48-zll48" Oct 31 01:14:20.616731 kubelet[2122]: I1031 01:14:20.616677 2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/915ed83a-b888-406d-a30d-ce13e1698a2c-var-lib-calico\") pod \"tigera-operator-7dcd859c48-zll48\" (UID: \"915ed83a-b888-406d-a30d-ce13e1698a2c\") " pod="tigera-operator/tigera-operator-7dcd859c48-zll48" Oct 31 01:14:20.621078 kubelet[2122]: I1031 01:14:20.621049 2122 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Oct 31 01:14:20.810183 kubelet[2122]: E1031 01:14:20.810142 2122 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:14:20.810877 env[1316]: time="2025-10-31T01:14:20.810823498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nmzzs,Uid:307938ef-9a2d-44b8-a406-ca299241c462,Namespace:kube-system,Attempt:0,}" Oct 31 01:14:20.816361 env[1316]: time="2025-10-31T01:14:20.816323181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-zll48,Uid:915ed83a-b888-406d-a30d-ce13e1698a2c,Namespace:tigera-operator,Attempt:0,}" Oct 31 01:14:20.873578 env[1316]: time="2025-10-31T01:14:20.872453844Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 01:14:20.873578 env[1316]: time="2025-10-31T01:14:20.872486749Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 01:14:20.873578 env[1316]: time="2025-10-31T01:14:20.872496307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:14:20.873578 env[1316]: time="2025-10-31T01:14:20.872655962Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0e661af6ac2f3c22103a47da897d51e440f8e01d6e0f67789cf7d7ef1d6d0981 pid=2184 runtime=io.containerd.runc.v2 Oct 31 01:14:20.875458 env[1316]: time="2025-10-31T01:14:20.874443966Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 01:14:20.875458 env[1316]: time="2025-10-31T01:14:20.874471050Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 01:14:20.875458 env[1316]: time="2025-10-31T01:14:20.874481270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:14:20.875458 env[1316]: time="2025-10-31T01:14:20.874629491Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4e492a5a47aeb87e5158a3f1893ef4ce8d32725cc08d8eebad9d3d8653bd46c5 pid=2194 runtime=io.containerd.runc.v2 Oct 31 01:14:20.916419 env[1316]: time="2025-10-31T01:14:20.916380504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nmzzs,Uid:307938ef-9a2d-44b8-a406-ca299241c462,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e492a5a47aeb87e5158a3f1893ef4ce8d32725cc08d8eebad9d3d8653bd46c5\"" Oct 31 01:14:20.916974 kubelet[2122]: E1031 01:14:20.916816 2122 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:14:20.924637 env[1316]: time="2025-10-31T01:14:20.919844634Z" level=info msg="CreateContainer within sandbox \"4e492a5a47aeb87e5158a3f1893ef4ce8d32725cc08d8eebad9d3d8653bd46c5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 31 01:14:20.945618 env[1316]: time="2025-10-31T01:14:20.945555913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-zll48,Uid:915ed83a-b888-406d-a30d-ce13e1698a2c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"0e661af6ac2f3c22103a47da897d51e440f8e01d6e0f67789cf7d7ef1d6d0981\"" Oct 31 01:14:20.948916 env[1316]: time="2025-10-31T01:14:20.948870479Z" level=info msg="PullImage 
\"quay.io/tigera/operator:v1.38.7\"" Oct 31 01:14:20.957005 env[1316]: time="2025-10-31T01:14:20.956954201Z" level=info msg="CreateContainer within sandbox \"4e492a5a47aeb87e5158a3f1893ef4ce8d32725cc08d8eebad9d3d8653bd46c5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0949e34754fe3820e73d167704ea645f8b0b89c33e505dba8af49e5d5482eb1e\"" Oct 31 01:14:20.957600 env[1316]: time="2025-10-31T01:14:20.957576435Z" level=info msg="StartContainer for \"0949e34754fe3820e73d167704ea645f8b0b89c33e505dba8af49e5d5482eb1e\"" Oct 31 01:14:20.998832 env[1316]: time="2025-10-31T01:14:20.998782335Z" level=info msg="StartContainer for \"0949e34754fe3820e73d167704ea645f8b0b89c33e505dba8af49e5d5482eb1e\" returns successfully" Oct 31 01:14:21.098000 audit[2326]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2326 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:14:21.100543 kernel: kauditd_printk_skb: 4 callbacks suppressed Oct 31 01:14:21.100602 kernel: audit: type=1325 audit(1761873261.098:225): table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2326 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:14:21.098000 audit[2326]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc7f1a1290 a2=0 a3=7ffc7f1a127c items=0 ppid=2275 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:21.112622 kernel: audit: type=1300 audit(1761873261.098:225): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc7f1a1290 a2=0 a3=7ffc7f1a127c items=0 ppid=2275 pid=2326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:21.112670 kernel: audit: type=1327 audit(1761873261.098:225): 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 31 01:14:21.098000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 31 01:14:21.098000 audit[2327]: NETFILTER_CFG table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2327 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:14:21.120497 kernel: audit: type=1325 audit(1761873261.098:226): table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2327 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:14:21.120592 kernel: audit: type=1300 audit(1761873261.098:226): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc36149460 a2=0 a3=7ffc3614944c items=0 ppid=2275 pid=2327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:21.098000 audit[2327]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc36149460 a2=0 a3=7ffc3614944c items=0 ppid=2275 pid=2327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:21.128190 kernel: audit: type=1327 audit(1761873261.098:226): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 31 01:14:21.098000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 31 01:14:21.131990 kernel: audit: type=1325 audit(1761873261.099:227): table=nat:40 family=10 entries=1 op=nft_register_chain pid=2328 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:14:21.099000 audit[2328]: NETFILTER_CFG table=nat:40 
family=10 entries=1 op=nft_register_chain pid=2328 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:14:21.135630 kernel: audit: type=1300 audit(1761873261.099:227): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdb4d911a0 a2=0 a3=7ffdb4d9118c items=0 ppid=2275 pid=2328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:21.099000 audit[2328]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdb4d911a0 a2=0 a3=7ffdb4d9118c items=0 ppid=2275 pid=2328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:21.099000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 31 01:14:21.146999 kernel: audit: type=1327 audit(1761873261.099:227): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 31 01:14:21.147022 kernel: audit: type=1325 audit(1761873261.100:228): table=filter:41 family=10 entries=1 op=nft_register_chain pid=2330 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:14:21.100000 audit[2330]: NETFILTER_CFG table=filter:41 family=10 entries=1 op=nft_register_chain pid=2330 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:14:21.100000 audit[2330]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdc619bab0 a2=0 a3=7ffdc619ba9c items=0 ppid=2275 pid=2330 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:21.100000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 31 01:14:21.101000 audit[2331]: NETFILTER_CFG table=nat:42 family=2 entries=1 op=nft_register_chain pid=2331 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:14:21.101000 audit[2331]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd6c1084b0 a2=0 a3=7ffd6c10849c items=0 ppid=2275 pid=2331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:21.101000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 31 01:14:21.102000 audit[2332]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2332 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:14:21.102000 audit[2332]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffea09855c0 a2=0 a3=7ffea09855ac items=0 ppid=2275 pid=2332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:21.102000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 31 01:14:21.201000 audit[2333]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2333 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:14:21.201000 audit[2333]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc5b893340 a2=0 a3=7ffc5b89332c items=0 ppid=2275 pid=2333 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 
01:14:21.201000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 31 01:14:21.203000 audit[2335]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2335 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:14:21.203000 audit[2335]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffebe704a30 a2=0 a3=7ffebe704a1c items=0 ppid=2275 pid=2335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:21.203000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Oct 31 01:14:21.206000 audit[2338]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2338 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:14:21.206000 audit[2338]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd590cc310 a2=0 a3=7ffd590cc2fc items=0 ppid=2275 pid=2338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:21.206000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Oct 31 01:14:21.207000 audit[2339]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2339 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 
01:14:21.207000 audit[2339]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd0cc9e680 a2=0 a3=7ffd0cc9e66c items=0 ppid=2275 pid=2339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:21.207000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 31 01:14:21.209000 audit[2341]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2341 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:14:21.209000 audit[2341]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffccaa7d700 a2=0 a3=7ffccaa7d6ec items=0 ppid=2275 pid=2341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:21.209000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 31 01:14:21.210000 audit[2342]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2342 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:14:21.210000 audit[2342]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcbce74d40 a2=0 a3=7ffcbce74d2c items=0 ppid=2275 pid=2342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:21.210000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 31 
01:14:21.212000 audit[2344]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2344 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:14:21.212000 audit[2344]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffee0c1e260 a2=0 a3=7ffee0c1e24c items=0 ppid=2275 pid=2344 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:21.212000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 31 01:14:21.215000 audit[2347]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2347 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:14:21.215000 audit[2347]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd87e1ed10 a2=0 a3=7ffd87e1ecfc items=0 ppid=2275 pid=2347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:21.215000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Oct 31 01:14:21.216000 audit[2348]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2348 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:14:21.216000 audit[2348]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe08bb7e00 a2=0 a3=7ffe08bb7dec items=0 ppid=2275 pid=2348 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:21.216000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 31 01:14:21.219000 audit[2350]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2350 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:14:21.219000 audit[2350]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc56662fc0 a2=0 a3=7ffc56662fac items=0 ppid=2275 pid=2350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:21.219000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 31 01:14:21.220000 audit[2351]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2351 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:14:21.220000 audit[2351]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff37651e30 a2=0 a3=7fff37651e1c items=0 ppid=2275 pid=2351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:21.220000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 31 01:14:21.222000 audit[2353]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2353 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:14:21.222000 audit[2353]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffec5d478d0 a2=0 a3=7ffec5d478bc items=0 ppid=2275 pid=2353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:21.222000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 31 01:14:21.225000 audit[2356]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2356 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:14:21.225000 audit[2356]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcb30a04e0 a2=0 a3=7ffcb30a04cc items=0 ppid=2275 pid=2356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:21.225000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 31 01:14:21.228000 audit[2359]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2359 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:14:21.228000 audit[2359]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe432ce500 a2=0 a3=7ffe432ce4ec items=0 ppid=2275 pid=2359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:21.228000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 31 01:14:21.229000 audit[2360]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2360 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:14:21.229000 audit[2360]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd396ac1b0 a2=0 a3=7ffd396ac19c items=0 ppid=2275 pid=2360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:21.229000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 31 01:14:21.231000 audit[2362]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2362 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:14:21.231000 audit[2362]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffff4a92530 a2=0 a3=7ffff4a9251c items=0 ppid=2275 pid=2362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:21.231000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 31 01:14:21.235000 audit[2365]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2365 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:14:21.235000 audit[2365]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcb0eefef0 a2=0 a3=7ffcb0eefedc 
items=0 ppid=2275 pid=2365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:21.235000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 31 01:14:21.235000 audit[2366]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2366 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:14:21.235000 audit[2366]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff3b9d6d20 a2=0 a3=7fff3b9d6d0c items=0 ppid=2275 pid=2366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:21.235000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 31 01:14:21.238000 audit[2368]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2368 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 01:14:21.238000 audit[2368]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffeb09999c0 a2=0 a3=7ffeb09999ac items=0 ppid=2275 pid=2368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:21.238000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 31 01:14:21.261000 audit[2374]: 
NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2374 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:14:21.261000 audit[2374]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffef26833a0 a2=0 a3=7ffef268338c items=0 ppid=2275 pid=2374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:21.261000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:14:21.272000 audit[2374]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2374 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:14:21.272000 audit[2374]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffef26833a0 a2=0 a3=7ffef268338c items=0 ppid=2275 pid=2374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:21.272000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:14:21.273000 audit[2379]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2379 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:14:21.273000 audit[2379]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fffc6abfd20 a2=0 a3=7fffc6abfd0c items=0 ppid=2275 pid=2379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:21.273000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 31 01:14:21.277000 audit[2381]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2381 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:14:21.277000 audit[2381]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffc368b10f0 a2=0 a3=7ffc368b10dc items=0 ppid=2275 pid=2381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:21.277000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Oct 31 01:14:21.281000 audit[2384]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2384 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:14:21.281000 audit[2384]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffeb80e0200 a2=0 a3=7ffeb80e01ec items=0 ppid=2275 pid=2384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:21.281000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Oct 31 01:14:21.282000 audit[2385]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2385 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:14:21.282000 audit[2385]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc2a1b9a70 a2=0 a3=7ffc2a1b9a5c items=0 ppid=2275 pid=2385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:21.282000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 31 01:14:21.284000 audit[2387]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2387 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:14:21.284000 audit[2387]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe6c8b3ec0 a2=0 a3=7ffe6c8b3eac items=0 ppid=2275 pid=2387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:21.284000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 31 01:14:21.285000 audit[2388]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2388 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:14:21.285000 audit[2388]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd1ba0dbb0 a2=0 a3=7ffd1ba0db9c items=0 ppid=2275 pid=2388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:21.285000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 31 01:14:21.288000 audit[2390]: 
NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2390 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:14:21.288000 audit[2390]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fffd0e2bb90 a2=0 a3=7fffd0e2bb7c items=0 ppid=2275 pid=2390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:21.288000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Oct 31 01:14:21.291000 audit[2393]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2393 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:14:21.291000 audit[2393]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffd00338160 a2=0 a3=7ffd0033814c items=0 ppid=2275 pid=2393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:21.291000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 31 01:14:21.292000 audit[2394]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2394 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:14:21.292000 audit[2394]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd4748a200 a2=0 a3=7ffd4748a1ec items=0 ppid=2275 pid=2394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:21.292000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 31 01:14:21.294000 audit[2396]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2396 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:14:21.294000 audit[2396]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcd10dd3c0 a2=0 a3=7ffcd10dd3ac items=0 ppid=2275 pid=2396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:21.294000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 31 01:14:21.295000 audit[2397]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2397 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:14:21.295000 audit[2397]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc29a46430 a2=0 a3=7ffc29a4641c items=0 ppid=2275 pid=2397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:21.295000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 31 01:14:21.297000 audit[2399]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2399 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:14:21.297000 audit[2399]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff3a531e90 a2=0 a3=7fff3a531e7c items=0 ppid=2275 pid=2399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:21.297000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 31 01:14:21.301000 audit[2402]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2402 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:14:21.301000 audit[2402]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe6890de90 a2=0 a3=7ffe6890de7c items=0 ppid=2275 pid=2402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:21.301000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 31 01:14:21.304000 audit[2405]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2405 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:14:21.304000 audit[2405]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe7888ead0 a2=0 a3=7ffe7888eabc items=0 ppid=2275 pid=2405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:21.304000 audit: 
PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Oct 31 01:14:21.305000 audit[2406]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2406 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:14:21.305000 audit[2406]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe438a99e0 a2=0 a3=7ffe438a99cc items=0 ppid=2275 pid=2406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:21.305000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 31 01:14:21.308000 audit[2408]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2408 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:14:21.308000 audit[2408]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffef6c9c130 a2=0 a3=7ffef6c9c11c items=0 ppid=2275 pid=2408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:21.308000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 31 01:14:21.310000 audit[2411]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2411 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:14:21.310000 audit[2411]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 
a1=7ffd9f8e0c50 a2=0 a3=7ffd9f8e0c3c items=0 ppid=2275 pid=2411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:21.310000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 31 01:14:21.311000 audit[2412]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2412 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:14:21.311000 audit[2412]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc6b55b8b0 a2=0 a3=7ffc6b55b89c items=0 ppid=2275 pid=2412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:21.311000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 31 01:14:21.314000 audit[2414]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2414 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:14:21.314000 audit[2414]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fff118b5390 a2=0 a3=7fff118b537c items=0 ppid=2275 pid=2414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:21.314000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 31 01:14:21.315000 audit[2415]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2415 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:14:21.315000 audit[2415]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeca668840 a2=0 a3=7ffeca66882c items=0 ppid=2275 pid=2415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:21.315000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 31 01:14:21.317000 audit[2417]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2417 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:14:21.317000 audit[2417]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc964ed010 a2=0 a3=7ffc964ecffc items=0 ppid=2275 pid=2417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:21.317000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 31 01:14:21.320000 audit[2420]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2420 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 01:14:21.320000 audit[2420]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffcf7990d30 a2=0 a3=7ffcf7990d1c items=0 ppid=2275 pid=2420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:21.320000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 31 01:14:21.323000 audit[2422]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2422 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 31 01:14:21.323000 audit[2422]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7fff3c463640 a2=0 a3=7fff3c46362c items=0 ppid=2275 pid=2422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:21.323000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:14:21.323000 audit[2422]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2422 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 31 01:14:21.323000 audit[2422]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7fff3c463640 a2=0 a3=7fff3c46362c items=0 ppid=2275 pid=2422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:21.323000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:14:21.506470 kubelet[2122]: E1031 01:14:21.506440 2122 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:14:21.515626 kubelet[2122]: I1031 01:14:21.515554 2122 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nmzzs" podStartSLOduration=2.515531158 podStartE2EDuration="2.515531158s" podCreationTimestamp="2025-10-31 01:14:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 01:14:21.51521594 +0000 UTC m=+9.132669723" watchObservedRunningTime="2025-10-31 01:14:21.515531158 +0000 UTC m=+9.132984921" Oct 31 01:14:22.696763 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount432490621.mount: Deactivated successfully. Oct 31 01:14:23.808236 env[1316]: time="2025-10-31T01:14:23.808179955Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.38.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:14:23.810932 env[1316]: time="2025-10-31T01:14:23.810909003Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:14:23.812589 env[1316]: time="2025-10-31T01:14:23.812525620Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.38.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:14:23.814171 env[1316]: time="2025-10-31T01:14:23.814121156Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:14:23.814602 env[1316]: time="2025-10-31T01:14:23.814563199Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Oct 31 01:14:23.816536 env[1316]: time="2025-10-31T01:14:23.816507174Z" level=info 
msg="CreateContainer within sandbox \"0e661af6ac2f3c22103a47da897d51e440f8e01d6e0f67789cf7d7ef1d6d0981\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 31 01:14:23.827979 env[1316]: time="2025-10-31T01:14:23.827912694Z" level=info msg="CreateContainer within sandbox \"0e661af6ac2f3c22103a47da897d51e440f8e01d6e0f67789cf7d7ef1d6d0981\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"1f1c3da925efd43083e290c091c2830d4112652a0c2ad0cdd7d0233e42190f39\"" Oct 31 01:14:23.828383 env[1316]: time="2025-10-31T01:14:23.828347864Z" level=info msg="StartContainer for \"1f1c3da925efd43083e290c091c2830d4112652a0c2ad0cdd7d0233e42190f39\"" Oct 31 01:14:23.993439 env[1316]: time="2025-10-31T01:14:23.993395794Z" level=info msg="StartContainer for \"1f1c3da925efd43083e290c091c2830d4112652a0c2ad0cdd7d0233e42190f39\" returns successfully" Oct 31 01:14:25.435086 kubelet[2122]: E1031 01:14:25.435018 2122 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:14:25.454977 kubelet[2122]: I1031 01:14:25.454915 2122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-zll48" podStartSLOduration=2.587860196 podStartE2EDuration="5.454891427s" podCreationTimestamp="2025-10-31 01:14:20 +0000 UTC" firstStartedPulling="2025-10-31 01:14:20.948346689 +0000 UTC m=+8.565800452" lastFinishedPulling="2025-10-31 01:14:23.815377909 +0000 UTC m=+11.432831683" observedRunningTime="2025-10-31 01:14:24.518152814 +0000 UTC m=+12.135606587" watchObservedRunningTime="2025-10-31 01:14:25.454891427 +0000 UTC m=+13.072345190" Oct 31 01:14:25.512751 kubelet[2122]: E1031 01:14:25.512712 2122 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 
01:14:26.862339 update_engine[1306]: I1031 01:14:26.862274 1306 update_attempter.cc:509] Updating boot flags... Oct 31 01:14:30.012520 sudo[1474]: pam_unix(sudo:session): session closed for user root Oct 31 01:14:30.010000 audit[1474]: USER_END pid=1474 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 31 01:14:30.020885 kernel: kauditd_printk_skb: 143 callbacks suppressed Oct 31 01:14:30.021044 kernel: audit: type=1106 audit(1761873270.010:276): pid=1474 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 31 01:14:30.027533 kernel: audit: type=1104 audit(1761873270.011:277): pid=1474 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 31 01:14:30.011000 audit[1474]: CRED_DISP pid=1474 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Oct 31 01:14:30.029692 sshd[1468]: pam_unix(sshd:session): session closed for user core Oct 31 01:14:30.029000 audit[1468]: USER_END pid=1468 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:14:30.040239 kernel: audit: type=1106 audit(1761873270.029:278): pid=1468 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:14:30.039358 systemd[1]: sshd@6-10.0.0.95:22-10.0.0.1:46942.service: Deactivated successfully. Oct 31 01:14:30.040523 systemd[1]: session-7.scope: Deactivated successfully. Oct 31 01:14:30.029000 audit[1468]: CRED_DISP pid=1468 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:14:30.041157 systemd-logind[1300]: Session 7 logged out. Waiting for processes to exit. Oct 31 01:14:30.042093 systemd-logind[1300]: Removed session 7. Oct 31 01:14:30.053373 kernel: audit: type=1104 audit(1761873270.029:279): pid=1468 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:14:30.053425 kernel: audit: type=1131 audit(1761873270.038:280): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.95:22-10.0.0.1:46942 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:14:30.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.95:22-10.0.0.1:46942 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:14:31.320000 audit[2532]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2532 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:14:31.326781 kernel: audit: type=1325 audit(1761873271.320:281): table=filter:89 family=2 entries=15 op=nft_register_rule pid=2532 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:14:31.320000 audit[2532]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffcaf4795d0 a2=0 a3=7ffcaf4795bc items=0 ppid=2275 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:31.336681 kernel: audit: type=1300 audit(1761873271.320:281): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffcaf4795d0 a2=0 a3=7ffcaf4795bc items=0 ppid=2275 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:31.320000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:14:31.349654 kernel: audit: type=1327 audit(1761873271.320:281): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:14:31.336000 audit[2532]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2532 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:14:31.336000 audit[2532]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=2700 a0=3 a1=7ffcaf4795d0 a2=0 a3=0 items=0 ppid=2275 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:31.371284 kernel: audit: type=1325 audit(1761873271.336:282): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2532 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:14:31.371426 kernel: audit: type=1300 audit(1761873271.336:282): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcaf4795d0 a2=0 a3=0 items=0 ppid=2275 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:31.336000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:14:31.359000 audit[2534]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=2534 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:14:31.359000 audit[2534]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffed4f65d00 a2=0 a3=7ffed4f65cec items=0 ppid=2275 pid=2534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:31.359000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:14:31.378000 audit[2534]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2534 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:14:31.378000 audit[2534]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffed4f65d00 a2=0 a3=0 items=0 
ppid=2275 pid=2534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:31.378000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:14:32.592000 audit[2537]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=2537 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:14:32.592000 audit[2537]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffe1120c9c0 a2=0 a3=7ffe1120c9ac items=0 ppid=2275 pid=2537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:32.592000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:14:32.597000 audit[2537]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2537 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:14:32.597000 audit[2537]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe1120c9c0 a2=0 a3=0 items=0 ppid=2275 pid=2537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:32.597000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:14:33.609000 audit[2539]: NETFILTER_CFG table=filter:95 family=2 entries=19 op=nft_register_rule pid=2539 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:14:33.609000 audit[2539]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=7480 a0=3 a1=7ffef4e90670 a2=0 a3=7ffef4e9065c items=0 ppid=2275 pid=2539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:33.609000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:14:33.613000 audit[2539]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=2539 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:14:33.613000 audit[2539]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffef4e90670 a2=0 a3=0 items=0 ppid=2275 pid=2539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:33.613000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:14:34.210391 kubelet[2122]: I1031 01:14:34.210296 2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/218f1e28-a780-4e86-ae86-7e28d3fca997-tigera-ca-bundle\") pod \"calico-typha-75c97cf56f-bjpmp\" (UID: \"218f1e28-a780-4e86-ae86-7e28d3fca997\") " pod="calico-system/calico-typha-75c97cf56f-bjpmp" Oct 31 01:14:34.210391 kubelet[2122]: I1031 01:14:34.210363 2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln9xd\" (UniqueName: \"kubernetes.io/projected/218f1e28-a780-4e86-ae86-7e28d3fca997-kube-api-access-ln9xd\") pod \"calico-typha-75c97cf56f-bjpmp\" (UID: \"218f1e28-a780-4e86-ae86-7e28d3fca997\") " pod="calico-system/calico-typha-75c97cf56f-bjpmp" Oct 31 01:14:34.210391 kubelet[2122]: I1031 
01:14:34.210413 2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/218f1e28-a780-4e86-ae86-7e28d3fca997-typha-certs\") pod \"calico-typha-75c97cf56f-bjpmp\" (UID: \"218f1e28-a780-4e86-ae86-7e28d3fca997\") " pod="calico-system/calico-typha-75c97cf56f-bjpmp" Oct 31 01:14:34.411952 kubelet[2122]: I1031 01:14:34.411881 2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6cc1e934-f9af-4c52-9c75-c59dc61011fe-var-lib-calico\") pod \"calico-node-9stjm\" (UID: \"6cc1e934-f9af-4c52-9c75-c59dc61011fe\") " pod="calico-system/calico-node-9stjm" Oct 31 01:14:34.411952 kubelet[2122]: I1031 01:14:34.411930 2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/6cc1e934-f9af-4c52-9c75-c59dc61011fe-flexvol-driver-host\") pod \"calico-node-9stjm\" (UID: \"6cc1e934-f9af-4c52-9c75-c59dc61011fe\") " pod="calico-system/calico-node-9stjm" Oct 31 01:14:34.411952 kubelet[2122]: I1031 01:14:34.411953 2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjnkw\" (UniqueName: \"kubernetes.io/projected/6cc1e934-f9af-4c52-9c75-c59dc61011fe-kube-api-access-xjnkw\") pod \"calico-node-9stjm\" (UID: \"6cc1e934-f9af-4c52-9c75-c59dc61011fe\") " pod="calico-system/calico-node-9stjm" Oct 31 01:14:34.411952 kubelet[2122]: I1031 01:14:34.411969 2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/6cc1e934-f9af-4c52-9c75-c59dc61011fe-var-run-calico\") pod \"calico-node-9stjm\" (UID: \"6cc1e934-f9af-4c52-9c75-c59dc61011fe\") " pod="calico-system/calico-node-9stjm" Oct 31 01:14:34.411952 kubelet[2122]: I1031 01:14:34.411985 2122 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/6cc1e934-f9af-4c52-9c75-c59dc61011fe-node-certs\") pod \"calico-node-9stjm\" (UID: \"6cc1e934-f9af-4c52-9c75-c59dc61011fe\") " pod="calico-system/calico-node-9stjm" Oct 31 01:14:34.412304 kubelet[2122]: I1031 01:14:34.411999 2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/6cc1e934-f9af-4c52-9c75-c59dc61011fe-policysync\") pod \"calico-node-9stjm\" (UID: \"6cc1e934-f9af-4c52-9c75-c59dc61011fe\") " pod="calico-system/calico-node-9stjm" Oct 31 01:14:34.412304 kubelet[2122]: I1031 01:14:34.412014 2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/6cc1e934-f9af-4c52-9c75-c59dc61011fe-cni-bin-dir\") pod \"calico-node-9stjm\" (UID: \"6cc1e934-f9af-4c52-9c75-c59dc61011fe\") " pod="calico-system/calico-node-9stjm" Oct 31 01:14:34.412304 kubelet[2122]: I1031 01:14:34.412029 2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cc1e934-f9af-4c52-9c75-c59dc61011fe-tigera-ca-bundle\") pod \"calico-node-9stjm\" (UID: \"6cc1e934-f9af-4c52-9c75-c59dc61011fe\") " pod="calico-system/calico-node-9stjm" Oct 31 01:14:34.412304 kubelet[2122]: I1031 01:14:34.412044 2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/6cc1e934-f9af-4c52-9c75-c59dc61011fe-cni-log-dir\") pod \"calico-node-9stjm\" (UID: \"6cc1e934-f9af-4c52-9c75-c59dc61011fe\") " pod="calico-system/calico-node-9stjm" Oct 31 01:14:34.412304 kubelet[2122]: I1031 01:14:34.412057 2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/6cc1e934-f9af-4c52-9c75-c59dc61011fe-cni-net-dir\") pod \"calico-node-9stjm\" (UID: \"6cc1e934-f9af-4c52-9c75-c59dc61011fe\") " pod="calico-system/calico-node-9stjm" Oct 31 01:14:34.412425 kubelet[2122]: I1031 01:14:34.412071 2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6cc1e934-f9af-4c52-9c75-c59dc61011fe-lib-modules\") pod \"calico-node-9stjm\" (UID: \"6cc1e934-f9af-4c52-9c75-c59dc61011fe\") " pod="calico-system/calico-node-9stjm" Oct 31 01:14:34.412425 kubelet[2122]: I1031 01:14:34.412084 2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6cc1e934-f9af-4c52-9c75-c59dc61011fe-xtables-lock\") pod \"calico-node-9stjm\" (UID: \"6cc1e934-f9af-4c52-9c75-c59dc61011fe\") " pod="calico-system/calico-node-9stjm" Oct 31 01:14:34.440065 kubelet[2122]: E1031 01:14:34.440011 2122 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:14:34.441068 env[1316]: time="2025-10-31T01:14:34.441007918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-75c97cf56f-bjpmp,Uid:218f1e28-a780-4e86-ae86-7e28d3fca997,Namespace:calico-system,Attempt:0,}" Oct 31 01:14:34.461251 env[1316]: time="2025-10-31T01:14:34.461055328Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 01:14:34.461251 env[1316]: time="2025-10-31T01:14:34.461100164Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 01:14:34.461251 env[1316]: time="2025-10-31T01:14:34.461110594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:14:34.461452 env[1316]: time="2025-10-31T01:14:34.461326209Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f445fce9861f8fa53dd02bde748b7bf848d77854ffb82542324b789b71f3676f pid=2549 runtime=io.containerd.runc.v2 Oct 31 01:14:34.514719 env[1316]: time="2025-10-31T01:14:34.514678069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-75c97cf56f-bjpmp,Uid:218f1e28-a780-4e86-ae86-7e28d3fca997,Namespace:calico-system,Attempt:0,} returns sandbox id \"f445fce9861f8fa53dd02bde748b7bf848d77854ffb82542324b789b71f3676f\"" Oct 31 01:14:34.515839 kubelet[2122]: E1031 01:14:34.515806 2122 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:14:34.517544 env[1316]: time="2025-10-31T01:14:34.517515438Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Oct 31 01:14:34.517997 kubelet[2122]: E1031 01:14:34.517954 2122 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:14:34.517997 kubelet[2122]: W1031 01:14:34.517977 2122 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:14:34.518129 kubelet[2122]: E1031 01:14:34.518012 2122 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:14:34.624000 audit[2585]: NETFILTER_CFG table=filter:97 family=2 entries=21 op=nft_register_rule pid=2585 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:14:34.624000 audit[2585]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffd4764d580 a2=0 a3=7ffd4764d56c items=0 ppid=2275 pid=2585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:34.624000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:14:34.630000 audit[2585]: NETFILTER_CFG table=nat:98 family=2 entries=12 op=nft_register_rule pid=2585 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:14:34.630000 audit[2585]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd4764d580 a2=0 a3=0 items=0 ppid=2275 pid=2585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:34.630000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:14:35.008987 kubelet[2122]: E1031 01:14:35.008952 2122 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:14:35.008987 kubelet[2122]: W1031 01:14:35.008979 2122 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:14:35.008987 kubelet[2122]: E1031 01:14:35.009012 2122 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume 
plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:14:35.060201 kubelet[2122]: E1031 01:14:35.059155 2122 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fd8js" podUID="bd0bddee-8a85-4f55-a28b-a795608cb1fb" Oct 31 01:14:35.109769 kubelet[2122]: E1031 01:14:35.109720 2122 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:14:35.109769 kubelet[2122]: W1031 01:14:35.109757 2122 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:14:35.110003 kubelet[2122]: E1031 01:14:35.109792 2122 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:14:35.116878 kubelet[2122]: I1031 01:14:35.116788 2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/bd0bddee-8a85-4f55-a28b-a795608cb1fb-socket-dir\") pod \"csi-node-driver-fd8js\" (UID: \"bd0bddee-8a85-4f55-a28b-a795608cb1fb\") " pod="calico-system/csi-node-driver-fd8js" Oct 31 01:14:35.117166 kubelet[2122]: I1031 01:14:35.117045 2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/bd0bddee-8a85-4f55-a28b-a795608cb1fb-registration-dir\") pod \"csi-node-driver-fd8js\" (UID: \"bd0bddee-8a85-4f55-a28b-a795608cb1fb\") " pod="calico-system/csi-node-driver-fd8js" Oct 31 01:14:35.117742 kubelet[2122]: I1031 01:14:35.117687 2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7fvl\" (UniqueName: \"kubernetes.io/projected/bd0bddee-8a85-4f55-a28b-a795608cb1fb-kube-api-access-n7fvl\") pod \"csi-node-driver-fd8js\" (UID: \"bd0bddee-8a85-4f55-a28b-a795608cb1fb\") " pod="calico-system/csi-node-driver-fd8js" Oct 31 01:14:35.118476 kubelet[2122]: I1031 01:14:35.118435 2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/bd0bddee-8a85-4f55-a28b-a795608cb1fb-varrun\") pod \"csi-node-driver-fd8js\" (UID: \"bd0bddee-8a85-4f55-a28b-a795608cb1fb\") " pod="calico-system/csi-node-driver-fd8js" Oct 31 01:14:35.118753 kubelet[2122]: I1031 01:14:35.118741 2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bd0bddee-8a85-4f55-a28b-a795608cb1fb-kubelet-dir\") pod \"csi-node-driver-fd8js\" (UID: \"bd0bddee-8a85-4f55-a28b-a795608cb1fb\") " pod="calico-system/csi-node-driver-fd8js" Oct 31 01:14:35.222936 kubelet[2122]: E1031 01:14:35.222912 2122 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:14:35.222936 kubelet[2122]: W1031 01:14:35.222924 2122 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:14:35.223031 kubelet[2122]: E1031 01:14:35.222950 2122 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:14:35.223090 kubelet[2122]: E1031 01:14:35.223076 2122 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:14:35.223134 kubelet[2122]: W1031 01:14:35.223090 2122 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:14:35.223134 kubelet[2122]: E1031 01:14:35.223126 2122 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:14:35.223273 kubelet[2122]: E1031 01:14:35.223258 2122 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:14:35.223273 kubelet[2122]: W1031 01:14:35.223269 2122 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:14:35.223376 kubelet[2122]: E1031 01:14:35.223294 2122 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:14:35.223439 kubelet[2122]: E1031 01:14:35.223424 2122 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:14:35.223439 kubelet[2122]: W1031 01:14:35.223435 2122 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:14:35.223512 kubelet[2122]: E1031 01:14:35.223461 2122 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:14:35.223592 kubelet[2122]: E1031 01:14:35.223579 2122 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:14:35.223592 kubelet[2122]: W1031 01:14:35.223590 2122 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:14:35.223698 kubelet[2122]: E1031 01:14:35.223628 2122 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:14:35.223817 kubelet[2122]: E1031 01:14:35.223803 2122 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:14:35.223817 kubelet[2122]: W1031 01:14:35.223814 2122 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:14:35.223897 kubelet[2122]: E1031 01:14:35.223830 2122 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:14:35.223988 kubelet[2122]: E1031 01:14:35.223975 2122 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:14:35.223988 kubelet[2122]: W1031 01:14:35.223984 2122 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:14:35.224083 kubelet[2122]: E1031 01:14:35.223995 2122 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:14:35.224173 kubelet[2122]: E1031 01:14:35.224157 2122 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:14:35.224173 kubelet[2122]: W1031 01:14:35.224169 2122 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:14:35.224257 kubelet[2122]: E1031 01:14:35.224186 2122 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:14:35.224526 kubelet[2122]: E1031 01:14:35.224501 2122 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:14:35.224526 kubelet[2122]: W1031 01:14:35.224522 2122 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:14:35.224646 kubelet[2122]: E1031 01:14:35.224553 2122 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:14:35.224841 kubelet[2122]: E1031 01:14:35.224823 2122 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:14:35.224841 kubelet[2122]: W1031 01:14:35.224836 2122 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:14:35.224917 kubelet[2122]: E1031 01:14:35.224853 2122 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:14:35.225099 kubelet[2122]: E1031 01:14:35.225082 2122 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:14:35.225099 kubelet[2122]: W1031 01:14:35.225094 2122 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:14:35.225196 kubelet[2122]: E1031 01:14:35.225106 2122 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:14:35.225328 kubelet[2122]: E1031 01:14:35.225307 2122 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:14:35.225328 kubelet[2122]: W1031 01:14:35.225322 2122 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:14:35.225444 kubelet[2122]: E1031 01:14:35.225334 2122 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:14:35.232791 kubelet[2122]: E1031 01:14:35.232757 2122 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:14:35.232791 kubelet[2122]: W1031 01:14:35.232778 2122 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:14:35.232923 kubelet[2122]: E1031 01:14:35.232799 2122 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:14:35.245993 kubelet[2122]: E1031 01:14:35.245960 2122 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:14:35.246573 env[1316]: time="2025-10-31T01:14:35.246528015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9stjm,Uid:6cc1e934-f9af-4c52-9c75-c59dc61011fe,Namespace:calico-system,Attempt:0,}" Oct 31 01:14:35.261941 env[1316]: time="2025-10-31T01:14:35.261821866Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 01:14:35.261941 env[1316]: time="2025-10-31T01:14:35.261856062Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 01:14:35.262131 env[1316]: time="2025-10-31T01:14:35.261865560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:14:35.262682 env[1316]: time="2025-10-31T01:14:35.262597384Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ea18f47df5bab96a3b893f5407c3010cfd2a1f1c3f62aa1c9ae8e1174c7501d4 pid=2668 runtime=io.containerd.runc.v2 Oct 31 01:14:35.294847 env[1316]: time="2025-10-31T01:14:35.294802370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9stjm,Uid:6cc1e934-f9af-4c52-9c75-c59dc61011fe,Namespace:calico-system,Attempt:0,} returns sandbox id \"ea18f47df5bab96a3b893f5407c3010cfd2a1f1c3f62aa1c9ae8e1174c7501d4\"" Oct 31 01:14:35.296158 kubelet[2122]: E1031 01:14:35.295828 2122 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:14:36.381339 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3180642017.mount: Deactivated successfully. 
Oct 31 01:14:36.486310 kubelet[2122]: E1031 01:14:36.486260 2122 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fd8js" podUID="bd0bddee-8a85-4f55-a28b-a795608cb1fb" Oct 31 01:14:37.138642 env[1316]: time="2025-10-31T01:14:37.138567919Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:14:37.140656 env[1316]: time="2025-10-31T01:14:37.140603046Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:14:37.142472 env[1316]: time="2025-10-31T01:14:37.142411519Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:14:37.143952 env[1316]: time="2025-10-31T01:14:37.143921601Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:14:37.144211 env[1316]: time="2025-10-31T01:14:37.144180637Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Oct 31 01:14:37.147310 env[1316]: time="2025-10-31T01:14:37.146827126Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Oct 31 01:14:37.162288 env[1316]: time="2025-10-31T01:14:37.162243859Z" level=info msg="CreateContainer within sandbox 
\"f445fce9861f8fa53dd02bde748b7bf848d77854ffb82542324b789b71f3676f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 31 01:14:37.175570 env[1316]: time="2025-10-31T01:14:37.175521725Z" level=info msg="CreateContainer within sandbox \"f445fce9861f8fa53dd02bde748b7bf848d77854ffb82542324b789b71f3676f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"257b9ee948c08fbe0afd2941798a93fbb181deb9e54febc93356989ea0ee35dc\"" Oct 31 01:14:37.176839 env[1316]: time="2025-10-31T01:14:37.176812227Z" level=info msg="StartContainer for \"257b9ee948c08fbe0afd2941798a93fbb181deb9e54febc93356989ea0ee35dc\"" Oct 31 01:14:37.252952 env[1316]: time="2025-10-31T01:14:37.252876370Z" level=info msg="StartContainer for \"257b9ee948c08fbe0afd2941798a93fbb181deb9e54febc93356989ea0ee35dc\" returns successfully" Oct 31 01:14:37.545013 kubelet[2122]: E1031 01:14:37.544974 2122 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:14:37.559003 kubelet[2122]: I1031 01:14:37.558534 2122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-75c97cf56f-bjpmp" podStartSLOduration=0.928900168 podStartE2EDuration="3.558502836s" podCreationTimestamp="2025-10-31 01:14:34 +0000 UTC" firstStartedPulling="2025-10-31 01:14:34.516891369 +0000 UTC m=+22.134345132" lastFinishedPulling="2025-10-31 01:14:37.146494027 +0000 UTC m=+24.763947800" observedRunningTime="2025-10-31 01:14:37.557557096 +0000 UTC m=+25.175010869" watchObservedRunningTime="2025-10-31 01:14:37.558502836 +0000 UTC m=+25.175956589" Oct 31 01:14:37.634152 kubelet[2122]: E1031 01:14:37.634095 2122 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:14:37.634152 kubelet[2122]: W1031 01:14:37.634124 2122 driver-call.go:149] FlexVolume: driver call failed: 
executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:14:37.634689 kubelet[2122]: E1031 01:14:37.634664 2122 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:14:37.634986 kubelet[2122]: E1031 01:14:37.634957 2122 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:14:37.634986 kubelet[2122]: W1031 01:14:37.634968 2122 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:14:37.634986 kubelet[2122]: E1031 01:14:37.634977 2122 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:14:37.635225 kubelet[2122]: E1031 01:14:37.635196 2122 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:14:37.635225 kubelet[2122]: W1031 01:14:37.635207 2122 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:14:37.635225 kubelet[2122]: E1031 01:14:37.635214 2122 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:14:37.635468 kubelet[2122]: E1031 01:14:37.635455 2122 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:14:37.635468 kubelet[2122]: W1031 01:14:37.635463 2122 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:14:37.635539 kubelet[2122]: E1031 01:14:37.635470 2122 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:14:37.635657 kubelet[2122]: E1031 01:14:37.635643 2122 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:14:37.635657 kubelet[2122]: W1031 01:14:37.635654 2122 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:14:37.635731 kubelet[2122]: E1031 01:14:37.635662 2122 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:14:37.635936 kubelet[2122]: E1031 01:14:37.635901 2122 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:14:37.635936 kubelet[2122]: W1031 01:14:37.635929 2122 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:14:37.636076 kubelet[2122]: E1031 01:14:37.635958 2122 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:14:37.636191 kubelet[2122]: E1031 01:14:37.636172 2122 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:14:37.636191 kubelet[2122]: W1031 01:14:37.636182 2122 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:14:37.636191 kubelet[2122]: E1031 01:14:37.636190 2122 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:14:37.636353 kubelet[2122]: E1031 01:14:37.636347 2122 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:14:37.636353 kubelet[2122]: W1031 01:14:37.636354 2122 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:14:37.636459 kubelet[2122]: E1031 01:14:37.636363 2122 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:14:37.636550 kubelet[2122]: E1031 01:14:37.636533 2122 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:14:37.636550 kubelet[2122]: W1031 01:14:37.636542 2122 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:14:37.636550 kubelet[2122]: E1031 01:14:37.636550 2122 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:14:37.636704 kubelet[2122]: E1031 01:14:37.636698 2122 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:14:37.636704 kubelet[2122]: W1031 01:14:37.636705 2122 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:14:37.636794 kubelet[2122]: E1031 01:14:37.636712 2122 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:14:37.636870 kubelet[2122]: E1031 01:14:37.636853 2122 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:14:37.636870 kubelet[2122]: W1031 01:14:37.636862 2122 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:14:37.636870 kubelet[2122]: E1031 01:14:37.636869 2122 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:14:37.637068 kubelet[2122]: E1031 01:14:37.637051 2122 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:14:37.637068 kubelet[2122]: W1031 01:14:37.637061 2122 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:14:37.637068 kubelet[2122]: E1031 01:14:37.637069 2122 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:14:37.637222 kubelet[2122]: E1031 01:14:37.637209 2122 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:14:37.637222 kubelet[2122]: W1031 01:14:37.637218 2122 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:14:37.637295 kubelet[2122]: E1031 01:14:37.637226 2122 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:14:37.637393 kubelet[2122]: E1031 01:14:37.637379 2122 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:14:37.637393 kubelet[2122]: W1031 01:14:37.637389 2122 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:14:37.637393 kubelet[2122]: E1031 01:14:37.637396 2122 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:14:37.637558 kubelet[2122]: E1031 01:14:37.637545 2122 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:14:37.637558 kubelet[2122]: W1031 01:14:37.637556 2122 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:14:37.637630 kubelet[2122]: E1031 01:14:37.637563 2122 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:14:37.639454 kubelet[2122]: E1031 01:14:37.639431 2122 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:14:37.639454 kubelet[2122]: W1031 01:14:37.639445 2122 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:14:37.639454 kubelet[2122]: E1031 01:14:37.639454 2122 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:14:37.639707 kubelet[2122]: E1031 01:14:37.639687 2122 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:14:37.639707 kubelet[2122]: W1031 01:14:37.639698 2122 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:14:37.639707 kubelet[2122]: E1031 01:14:37.639710 2122 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:14:37.639937 kubelet[2122]: E1031 01:14:37.639914 2122 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:14:37.639937 kubelet[2122]: W1031 01:14:37.639929 2122 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:14:37.639937 kubelet[2122]: E1031 01:14:37.639944 2122 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:14:37.640151 kubelet[2122]: E1031 01:14:37.640133 2122 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:14:37.640151 kubelet[2122]: W1031 01:14:37.640142 2122 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:14:37.640151 kubelet[2122]: E1031 01:14:37.640155 2122 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Oct 31 01:14:37.640419 kubelet[2122]: E1031 01:14:37.640383 2122 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 01:14:37.640419 kubelet[2122]: W1031 01:14:37.640413 2122 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 01:14:37.640531 kubelet[2122]: E1031 01:14:37.640454 2122 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 01:14:37.640855 kubelet[2122]: E1031 01:14:37.640833 2122 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 01:14:37.640855 kubelet[2122]: W1031 01:14:37.640849 2122 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 01:14:37.640970 kubelet[2122]: E1031 01:14:37.640869 2122 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 01:14:37.641076 kubelet[2122]: E1031 01:14:37.641057 2122 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 01:14:37.641076 kubelet[2122]: W1031 01:14:37.641071 2122 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 01:14:37.641185 kubelet[2122]: E1031 01:14:37.641086 2122 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 01:14:37.641416 kubelet[2122]: E1031 01:14:37.641385 2122 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 01:14:37.641416 kubelet[2122]: W1031 01:14:37.641401 2122 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 01:14:37.641416 kubelet[2122]: E1031 01:14:37.641416 2122 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 01:14:37.641694 kubelet[2122]: E1031 01:14:37.641683 2122 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 01:14:37.641694 kubelet[2122]: W1031 01:14:37.641692 2122 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 01:14:37.641757 kubelet[2122]: E1031 01:14:37.641706 2122 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 01:14:37.641982 kubelet[2122]: E1031 01:14:37.641957 2122 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 01:14:37.641982 kubelet[2122]: W1031 01:14:37.641975 2122 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 01:14:37.642078 kubelet[2122]: E1031 01:14:37.642000 2122 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 01:14:37.642297 kubelet[2122]: E1031 01:14:37.642273 2122 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 01:14:37.642297 kubelet[2122]: W1031 01:14:37.642289 2122 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 01:14:37.642395 kubelet[2122]: E1031 01:14:37.642307 2122 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 01:14:37.642603 kubelet[2122]: E1031 01:14:37.642565 2122 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 01:14:37.642603 kubelet[2122]: W1031 01:14:37.642594 2122 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 01:14:37.642757 kubelet[2122]: E1031 01:14:37.642646 2122 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 01:14:37.642922 kubelet[2122]: E1031 01:14:37.642897 2122 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 01:14:37.642922 kubelet[2122]: W1031 01:14:37.642916 2122 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 01:14:37.643057 kubelet[2122]: E1031 01:14:37.642946 2122 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 01:14:37.643153 kubelet[2122]: E1031 01:14:37.643124 2122 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 01:14:37.643153 kubelet[2122]: W1031 01:14:37.643139 2122 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 01:14:37.643153 kubelet[2122]: E1031 01:14:37.643155 2122 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 01:14:37.643443 kubelet[2122]: E1031 01:14:37.643403 2122 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 01:14:37.643443 kubelet[2122]: W1031 01:14:37.643435 2122 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 01:14:37.643576 kubelet[2122]: E1031 01:14:37.643454 2122 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 01:14:37.643808 kubelet[2122]: E1031 01:14:37.643768 2122 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 01:14:37.643808 kubelet[2122]: W1031 01:14:37.643789 2122 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 01:14:37.643907 kubelet[2122]: E1031 01:14:37.643815 2122 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 01:14:37.644221 kubelet[2122]: E1031 01:14:37.644201 2122 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 01:14:37.644221 kubelet[2122]: W1031 01:14:37.644217 2122 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 01:14:37.644318 kubelet[2122]: E1031 01:14:37.644256 2122 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 01:14:37.644466 kubelet[2122]: E1031 01:14:37.644444 2122 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 01:14:37.644466 kubelet[2122]: W1031 01:14:37.644461 2122 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 01:14:37.644580 kubelet[2122]: E1031 01:14:37.644475 2122 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 31 01:14:38.457812 env[1316]: time="2025-10-31T01:14:38.457761761Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 01:14:38.459424 env[1316]: time="2025-10-31T01:14:38.459395357Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 01:14:38.460897 env[1316]: time="2025-10-31T01:14:38.460859328Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 01:14:38.462378 env[1316]: time="2025-10-31T01:14:38.462349449Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 01:14:38.462802 env[1316]: time="2025-10-31T01:14:38.462715840Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\""
Oct 31 01:14:38.466046 env[1316]: time="2025-10-31T01:14:38.466000194Z" level=info msg="CreateContainer within sandbox \"ea18f47df5bab96a3b893f5407c3010cfd2a1f1c3f62aa1c9ae8e1174c7501d4\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Oct 31 01:14:38.482191 env[1316]: time="2025-10-31T01:14:38.482111402Z" level=info msg="CreateContainer within sandbox \"ea18f47df5bab96a3b893f5407c3010cfd2a1f1c3f62aa1c9ae8e1174c7501d4\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"97e807855bf16cc03af92bcba365963d1419b19f329eec6ac96343edf81e440e\""
Oct 31 01:14:38.482726 env[1316]: time="2025-10-31T01:14:38.482689177Z" level=info msg="StartContainer for \"97e807855bf16cc03af92bcba365963d1419b19f329eec6ac96343edf81e440e\""
Oct 31 01:14:38.487279 kubelet[2122]: E1031 01:14:38.486354 2122 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fd8js" podUID="bd0bddee-8a85-4f55-a28b-a795608cb1fb"
Oct 31 01:14:38.507523 systemd[1]: run-containerd-runc-k8s.io-97e807855bf16cc03af92bcba365963d1419b19f329eec6ac96343edf81e440e-runc.JiMMym.mount: Deactivated successfully.
Oct 31 01:14:38.544358 env[1316]: time="2025-10-31T01:14:38.544312077Z" level=info msg="StartContainer for \"97e807855bf16cc03af92bcba365963d1419b19f329eec6ac96343edf81e440e\" returns successfully"
Oct 31 01:14:38.545675 kubelet[2122]: I1031 01:14:38.545642 2122 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 31 01:14:38.546072 kubelet[2122]: E1031 01:14:38.546048 2122 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 01:14:38.946714 env[1316]: time="2025-10-31T01:14:38.946661235Z" level=info msg="shim disconnected" id=97e807855bf16cc03af92bcba365963d1419b19f329eec6ac96343edf81e440e
Oct 31 01:14:38.946714 env[1316]: time="2025-10-31T01:14:38.946711110Z" level=warning msg="cleaning up after shim disconnected" id=97e807855bf16cc03af92bcba365963d1419b19f329eec6ac96343edf81e440e namespace=k8s.io
Oct 31 01:14:38.946714 env[1316]: time="2025-10-31T01:14:38.946720047Z" level=info msg="cleaning up dead shim"
Oct 31 01:14:38.955666 env[1316]: time="2025-10-31T01:14:38.955620066Z" level=warning msg="cleanup warnings time=\"2025-10-31T01:14:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io
pid=2828 runtime=io.containerd.runc.v2\n"
Oct 31 01:14:39.478262 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-97e807855bf16cc03af92bcba365963d1419b19f329eec6ac96343edf81e440e-rootfs.mount: Deactivated successfully.
Oct 31 01:14:39.549882 kubelet[2122]: E1031 01:14:39.549832 2122 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 01:14:39.550672 env[1316]: time="2025-10-31T01:14:39.550627194Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Oct 31 01:14:40.486868 kubelet[2122]: E1031 01:14:40.486801 2122 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fd8js" podUID="bd0bddee-8a85-4f55-a28b-a795608cb1fb"
Oct 31 01:14:42.487064 kubelet[2122]: E1031 01:14:42.486974 2122 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fd8js" podUID="bd0bddee-8a85-4f55-a28b-a795608cb1fb"
Oct 31 01:14:44.264848 env[1316]: time="2025-10-31T01:14:44.264786189Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 01:14:44.266932 env[1316]: time="2025-10-31T01:14:44.266894307Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 01:14:44.268601 env[1316]: time="2025-10-31T01:14:44.268565732Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 01:14:44.270051 env[1316]: time="2025-10-31T01:14:44.270004123Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 01:14:44.270639 env[1316]: time="2025-10-31T01:14:44.270586272Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\""
Oct 31 01:14:44.272602 env[1316]: time="2025-10-31T01:14:44.272572017Z" level=info msg="CreateContainer within sandbox \"ea18f47df5bab96a3b893f5407c3010cfd2a1f1c3f62aa1c9ae8e1174c7501d4\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Oct 31 01:14:44.287507 env[1316]: time="2025-10-31T01:14:44.287452635Z" level=info msg="CreateContainer within sandbox \"ea18f47df5bab96a3b893f5407c3010cfd2a1f1c3f62aa1c9ae8e1174c7501d4\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"532a6dd8a5e3b595773bf4f83a371ad10904c92fc1b8f81b07bd25aee495bbd5\""
Oct 31 01:14:44.287937 env[1316]: time="2025-10-31T01:14:44.287910076Z" level=info msg="StartContainer for \"532a6dd8a5e3b595773bf4f83a371ad10904c92fc1b8f81b07bd25aee495bbd5\""
Oct 31 01:14:44.486233 kubelet[2122]: E1031 01:14:44.486174 2122 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fd8js" podUID="bd0bddee-8a85-4f55-a28b-a795608cb1fb"
Oct 31 01:14:45.661969 env[1316]: time="2025-10-31T01:14:45.661879791Z" level=info msg="StartContainer for \"532a6dd8a5e3b595773bf4f83a371ad10904c92fc1b8f81b07bd25aee495bbd5\" returns successfully"
Oct 31 01:14:45.663647 kubelet[2122]: E1031 01:14:45.662709 2122 kubelet.go:2573] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.177s"
Oct 31 01:14:46.488274 kubelet[2122]: E1031 01:14:46.488230 2122 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fd8js" podUID="bd0bddee-8a85-4f55-a28b-a795608cb1fb"
Oct 31 01:14:46.539534 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-532a6dd8a5e3b595773bf4f83a371ad10904c92fc1b8f81b07bd25aee495bbd5-rootfs.mount: Deactivated successfully.
Oct 31 01:14:46.544121 env[1316]: time="2025-10-31T01:14:46.544077424Z" level=info msg="shim disconnected" id=532a6dd8a5e3b595773bf4f83a371ad10904c92fc1b8f81b07bd25aee495bbd5
Oct 31 01:14:46.544204 env[1316]: time="2025-10-31T01:14:46.544123152Z" level=warning msg="cleaning up after shim disconnected" id=532a6dd8a5e3b595773bf4f83a371ad10904c92fc1b8f81b07bd25aee495bbd5 namespace=k8s.io
Oct 31 01:14:46.544204 env[1316]: time="2025-10-31T01:14:46.544132880Z" level=info msg="cleaning up dead shim"
Oct 31 01:14:46.550243 env[1316]: time="2025-10-31T01:14:46.550193317Z" level=warning msg="cleanup warnings time=\"2025-10-31T01:14:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2893 runtime=io.containerd.runc.v2\n"
Oct 31 01:14:46.592999 kubelet[2122]: I1031 01:14:46.592952 2122 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Oct 31 01:14:46.669501 kubelet[2122]: E1031 01:14:46.669469 2122 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 01:14:46.670874 env[1316]: time="2025-10-31T01:14:46.670807777Z" level=info msg="PullImage
\"ghcr.io/flatcar/calico/node:v3.30.4\""
Oct 31 01:14:46.700471 kubelet[2122]: I1031 01:14:46.700430 2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b7a793cf-29da-4092-aaf4-95f63c307028-calico-apiserver-certs\") pod \"calico-apiserver-86687d576-r924d\" (UID: \"b7a793cf-29da-4092-aaf4-95f63c307028\") " pod="calico-apiserver/calico-apiserver-86687d576-r924d"
Oct 31 01:14:46.700471 kubelet[2122]: I1031 01:14:46.700471 2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50cdc712-db7a-41da-8129-57ca3765d884-goldmane-ca-bundle\") pod \"goldmane-666569f655-wj6mp\" (UID: \"50cdc712-db7a-41da-8129-57ca3765d884\") " pod="calico-system/goldmane-666569f655-wj6mp"
Oct 31 01:14:46.700778 kubelet[2122]: I1031 01:14:46.700504 2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2pd4\" (UniqueName: \"kubernetes.io/projected/aa45bbe1-c342-47d9-b9fb-8fc8197ae119-kube-api-access-r2pd4\") pod \"coredns-668d6bf9bc-kx9d2\" (UID: \"aa45bbe1-c342-47d9-b9fb-8fc8197ae119\") " pod="kube-system/coredns-668d6bf9bc-kx9d2"
Oct 31 01:14:46.700778 kubelet[2122]: I1031 01:14:46.700521 2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zbds\" (UniqueName: \"kubernetes.io/projected/50cdc712-db7a-41da-8129-57ca3765d884-kube-api-access-8zbds\") pod \"goldmane-666569f655-wj6mp\" (UID: \"50cdc712-db7a-41da-8129-57ca3765d884\") " pod="calico-system/goldmane-666569f655-wj6mp"
Oct 31 01:14:46.700778 kubelet[2122]: I1031 01:14:46.700555 2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/50cdc712-db7a-41da-8129-57ca3765d884-goldmane-key-pair\") pod \"goldmane-666569f655-wj6mp\" (UID: \"50cdc712-db7a-41da-8129-57ca3765d884\") " pod="calico-system/goldmane-666569f655-wj6mp"
Oct 31 01:14:46.700778 kubelet[2122]: I1031 01:14:46.700570 2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3bcad0e7-1720-4160-950e-8a81a3313d2c-whisker-ca-bundle\") pod \"whisker-c78cb78df-xltv5\" (UID: \"3bcad0e7-1720-4160-950e-8a81a3313d2c\") " pod="calico-system/whisker-c78cb78df-xltv5"
Oct 31 01:14:46.700778 kubelet[2122]: I1031 01:14:46.700585 2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/453498c9-0a59-4ad4-bd57-363364a2fea3-config-volume\") pod \"coredns-668d6bf9bc-p8xhs\" (UID: \"453498c9-0a59-4ad4-bd57-363364a2fea3\") " pod="kube-system/coredns-668d6bf9bc-p8xhs"
Oct 31 01:14:46.701050 kubelet[2122]: I1031 01:14:46.700604 2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnks7\" (UniqueName: \"kubernetes.io/projected/cbcd2bd9-2395-4730-b047-aac75539fb47-kube-api-access-hnks7\") pod \"calico-kube-controllers-85445fc7bc-269qr\" (UID: \"cbcd2bd9-2395-4730-b047-aac75539fb47\") " pod="calico-system/calico-kube-controllers-85445fc7bc-269qr"
Oct 31 01:14:46.701050 kubelet[2122]: I1031 01:14:46.700647 2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50cdc712-db7a-41da-8129-57ca3765d884-config\") pod \"goldmane-666569f655-wj6mp\" (UID: \"50cdc712-db7a-41da-8129-57ca3765d884\") " pod="calico-system/goldmane-666569f655-wj6mp"
Oct 31 01:14:46.701050 kubelet[2122]: I1031 01:14:46.700673 2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxwxj\" (UniqueName: \"kubernetes.io/projected/7cb997cc-c908-4ddb-9523-a2aea9785811-kube-api-access-gxwxj\") pod \"calico-apiserver-86687d576-lcpfh\" (UID: \"7cb997cc-c908-4ddb-9523-a2aea9785811\") " pod="calico-apiserver/calico-apiserver-86687d576-lcpfh"
Oct 31 01:14:46.701050 kubelet[2122]: I1031 01:14:46.700700 2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzgc4\" (UniqueName: \"kubernetes.io/projected/3bcad0e7-1720-4160-950e-8a81a3313d2c-kube-api-access-fzgc4\") pod \"whisker-c78cb78df-xltv5\" (UID: \"3bcad0e7-1720-4160-950e-8a81a3313d2c\") " pod="calico-system/whisker-c78cb78df-xltv5"
Oct 31 01:14:46.701050 kubelet[2122]: I1031 01:14:46.700715 2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3bcad0e7-1720-4160-950e-8a81a3313d2c-whisker-backend-key-pair\") pod \"whisker-c78cb78df-xltv5\" (UID: \"3bcad0e7-1720-4160-950e-8a81a3313d2c\") " pod="calico-system/whisker-c78cb78df-xltv5"
Oct 31 01:14:46.701193 kubelet[2122]: I1031 01:14:46.700731 2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrkrb\" (UniqueName: \"kubernetes.io/projected/453498c9-0a59-4ad4-bd57-363364a2fea3-kube-api-access-hrkrb\") pod \"coredns-668d6bf9bc-p8xhs\" (UID: \"453498c9-0a59-4ad4-bd57-363364a2fea3\") " pod="kube-system/coredns-668d6bf9bc-p8xhs"
Oct 31 01:14:46.701193 kubelet[2122]: I1031 01:14:46.700748 2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cbcd2bd9-2395-4730-b047-aac75539fb47-tigera-ca-bundle\") pod \"calico-kube-controllers-85445fc7bc-269qr\" (UID: \"cbcd2bd9-2395-4730-b047-aac75539fb47\") " pod="calico-system/calico-kube-controllers-85445fc7bc-269qr"
Oct 31 01:14:46.701193 kubelet[2122]: I1031 01:14:46.700770 2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa45bbe1-c342-47d9-b9fb-8fc8197ae119-config-volume\") pod \"coredns-668d6bf9bc-kx9d2\" (UID: \"aa45bbe1-c342-47d9-b9fb-8fc8197ae119\") " pod="kube-system/coredns-668d6bf9bc-kx9d2"
Oct 31 01:14:46.701193 kubelet[2122]: I1031 01:14:46.700785 2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7cb997cc-c908-4ddb-9523-a2aea9785811-calico-apiserver-certs\") pod \"calico-apiserver-86687d576-lcpfh\" (UID: \"7cb997cc-c908-4ddb-9523-a2aea9785811\") " pod="calico-apiserver/calico-apiserver-86687d576-lcpfh"
Oct 31 01:14:46.701193 kubelet[2122]: I1031 01:14:46.700801 2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9jfv\" (UniqueName: \"kubernetes.io/projected/b7a793cf-29da-4092-aaf4-95f63c307028-kube-api-access-r9jfv\") pod \"calico-apiserver-86687d576-r924d\" (UID: \"b7a793cf-29da-4092-aaf4-95f63c307028\") " pod="calico-apiserver/calico-apiserver-86687d576-r924d"
Oct 31 01:14:46.918530 env[1316]: time="2025-10-31T01:14:46.918491573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-wj6mp,Uid:50cdc712-db7a-41da-8129-57ca3765d884,Namespace:calico-system,Attempt:0,}"
Oct 31 01:14:46.924077 kubelet[2122]: E1031 01:14:46.924044 2122 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 01:14:46.924526 env[1316]: time="2025-10-31T01:14:46.924329898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p8xhs,Uid:453498c9-0a59-4ad4-bd57-363364a2fea3,Namespace:kube-system,Attempt:0,}"
Oct 31 01:14:46.928006 env[1316]: time="2025-10-31T01:14:46.927884646Z" level=info
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86687d576-lcpfh,Uid:7cb997cc-c908-4ddb-9523-a2aea9785811,Namespace:calico-apiserver,Attempt:0,}"
Oct 31 01:14:46.929182 kubelet[2122]: E1031 01:14:46.929152 2122 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 01:14:46.929664 env[1316]: time="2025-10-31T01:14:46.929623616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kx9d2,Uid:aa45bbe1-c342-47d9-b9fb-8fc8197ae119,Namespace:kube-system,Attempt:0,}"
Oct 31 01:14:46.931241 env[1316]: time="2025-10-31T01:14:46.931194677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85445fc7bc-269qr,Uid:cbcd2bd9-2395-4730-b047-aac75539fb47,Namespace:calico-system,Attempt:0,}"
Oct 31 01:14:46.931630 env[1316]: time="2025-10-31T01:14:46.931565993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c78cb78df-xltv5,Uid:3bcad0e7-1720-4160-950e-8a81a3313d2c,Namespace:calico-system,Attempt:0,}"
Oct 31 01:14:46.932234 env[1316]: time="2025-10-31T01:14:46.932208567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86687d576-r924d,Uid:b7a793cf-29da-4092-aaf4-95f63c307028,Namespace:calico-apiserver,Attempt:0,}"
Oct 31 01:14:47.097522 env[1316]: time="2025-10-31T01:14:47.097427676Z" level=error msg="Failed to destroy network for sandbox \"a8a7d15da3bf8cbdb191a985928692fcb1e7fac6e2665ad7368d328a401536ae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 31 01:14:47.097856 env[1316]: time="2025-10-31T01:14:47.097826825Z" level=error msg="encountered an error cleaning up failed sandbox \"a8a7d15da3bf8cbdb191a985928692fcb1e7fac6e2665ad7368d328a401536ae\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 31 01:14:47.097914 env[1316]: time="2025-10-31T01:14:47.097872542Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-wj6mp,Uid:50cdc712-db7a-41da-8129-57ca3765d884,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a8a7d15da3bf8cbdb191a985928692fcb1e7fac6e2665ad7368d328a401536ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 31 01:14:47.098517 kubelet[2122]: E1031 01:14:47.098110 2122 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8a7d15da3bf8cbdb191a985928692fcb1e7fac6e2665ad7368d328a401536ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 31 01:14:47.098517 kubelet[2122]: E1031 01:14:47.098194 2122 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8a7d15da3bf8cbdb191a985928692fcb1e7fac6e2665ad7368d328a401536ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-wj6mp"
Oct 31 01:14:47.098517 kubelet[2122]: E1031 01:14:47.098217 2122 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8a7d15da3bf8cbdb191a985928692fcb1e7fac6e2665ad7368d328a401536ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-wj6mp"
Oct 31 01:14:47.098811 kubelet[2122]: E1031 01:14:47.098275 2122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-wj6mp_calico-system(50cdc712-db7a-41da-8129-57ca3765d884)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-wj6mp_calico-system(50cdc712-db7a-41da-8129-57ca3765d884)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a8a7d15da3bf8cbdb191a985928692fcb1e7fac6e2665ad7368d328a401536ae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-wj6mp" podUID="50cdc712-db7a-41da-8129-57ca3765d884"
Oct 31 01:14:47.101464 env[1316]: time="2025-10-31T01:14:47.101383453Z" level=error msg="Failed to destroy network for sandbox \"4c359beb4f8ec8f908fe9d4402c54b29b0240b5e569583f1b61e012ab32f2ee0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 31 01:14:47.102060 env[1316]: time="2025-10-31T01:14:47.102023872Z" level=error msg="encountered an error cleaning up failed sandbox \"4c359beb4f8ec8f908fe9d4402c54b29b0240b5e569583f1b61e012ab32f2ee0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 31 01:14:47.102208 env[1316]: time="2025-10-31T01:14:47.102165111Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p8xhs,Uid:453498c9-0a59-4ad4-bd57-363364a2fea3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4c359beb4f8ec8f908fe9d4402c54b29b0240b5e569583f1b61e012ab32f2ee0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 31 01:14:47.102556 kubelet[2122]: E1031 01:14:47.102505 2122 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c359beb4f8ec8f908fe9d4402c54b29b0240b5e569583f1b61e012ab32f2ee0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 31 01:14:47.102646 kubelet[2122]: E1031 01:14:47.102558 2122 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c359beb4f8ec8f908fe9d4402c54b29b0240b5e569583f1b61e012ab32f2ee0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-p8xhs"
Oct 31 01:14:47.102646 kubelet[2122]: E1031 01:14:47.102577 2122 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c359beb4f8ec8f908fe9d4402c54b29b0240b5e569583f1b61e012ab32f2ee0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-p8xhs"
Oct 31 01:14:47.102711 kubelet[2122]: E1031 01:14:47.102677 2122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-p8xhs_kube-system(453498c9-0a59-4ad4-bd57-363364a2fea3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-p8xhs_kube-system(453498c9-0a59-4ad4-bd57-363364a2fea3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4c359beb4f8ec8f908fe9d4402c54b29b0240b5e569583f1b61e012ab32f2ee0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-p8xhs" podUID="453498c9-0a59-4ad4-bd57-363364a2fea3"
Oct 31 01:14:47.103952 env[1316]: time="2025-10-31T01:14:47.103925189Z" level=error msg="Failed to destroy network for sandbox \"a8dc1d78076c25070d75912ab77bb6fc27814dbb8dff1f2793ffbaeb1045cf72\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 31 01:14:47.104312 env[1316]: time="2025-10-31T01:14:47.104278923Z" level=error msg="encountered an error cleaning up failed sandbox \"a8dc1d78076c25070d75912ab77bb6fc27814dbb8dff1f2793ffbaeb1045cf72\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 31 01:14:47.104463 env[1316]: time="2025-10-31T01:14:47.104438897Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kx9d2,Uid:aa45bbe1-c342-47d9-b9fb-8fc8197ae119,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a8dc1d78076c25070d75912ab77bb6fc27814dbb8dff1f2793ffbaeb1045cf72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 31 01:14:47.104937 kubelet[2122]: E1031 01:14:47.104746 2122 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8dc1d78076c25070d75912ab77bb6fc27814dbb8dff1f2793ffbaeb1045cf72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 31 01:14:47.104937 kubelet[2122]: E1031 01:14:47.104807 2122 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8dc1d78076c25070d75912ab77bb6fc27814dbb8dff1f2793ffbaeb1045cf72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-kx9d2"
Oct 31 01:14:47.104937 kubelet[2122]: E1031 01:14:47.104829 2122 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8dc1d78076c25070d75912ab77bb6fc27814dbb8dff1f2793ffbaeb1045cf72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-kx9d2"
Oct 31 01:14:47.105047 kubelet[2122]: E1031 01:14:47.104888 2122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-kx9d2_kube-system(aa45bbe1-c342-47d9-b9fb-8fc8197ae119)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-kx9d2_kube-system(aa45bbe1-c342-47d9-b9fb-8fc8197ae119)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a8dc1d78076c25070d75912ab77bb6fc27814dbb8dff1f2793ffbaeb1045cf72\\\": plugin
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-kx9d2" podUID="aa45bbe1-c342-47d9-b9fb-8fc8197ae119" Oct 31 01:14:47.119638 env[1316]: time="2025-10-31T01:14:47.119559910Z" level=error msg="Failed to destroy network for sandbox \"a1b17665232b7120056382858a1fd08faf3450b3b25b3899b71c290103c6ea43\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:14:47.120214 env[1316]: time="2025-10-31T01:14:47.120187154Z" level=error msg="encountered an error cleaning up failed sandbox \"a1b17665232b7120056382858a1fd08faf3450b3b25b3899b71c290103c6ea43\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:14:47.120332 env[1316]: time="2025-10-31T01:14:47.120302032Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86687d576-r924d,Uid:b7a793cf-29da-4092-aaf4-95f63c307028,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a1b17665232b7120056382858a1fd08faf3450b3b25b3899b71c290103c6ea43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:14:47.120831 kubelet[2122]: E1031 01:14:47.120716 2122 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1b17665232b7120056382858a1fd08faf3450b3b25b3899b71c290103c6ea43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:14:47.120831 kubelet[2122]: E1031 01:14:47.120776 2122 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1b17665232b7120056382858a1fd08faf3450b3b25b3899b71c290103c6ea43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86687d576-r924d" Oct 31 01:14:47.121314 kubelet[2122]: E1031 01:14:47.120804 2122 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1b17665232b7120056382858a1fd08faf3450b3b25b3899b71c290103c6ea43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86687d576-r924d" Oct 31 01:14:47.121314 kubelet[2122]: E1031 01:14:47.121098 2122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-86687d576-r924d_calico-apiserver(b7a793cf-29da-4092-aaf4-95f63c307028)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-86687d576-r924d_calico-apiserver(b7a793cf-29da-4092-aaf4-95f63c307028)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a1b17665232b7120056382858a1fd08faf3450b3b25b3899b71c290103c6ea43\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-86687d576-r924d" podUID="b7a793cf-29da-4092-aaf4-95f63c307028" Oct 31 01:14:47.127994 env[1316]: time="2025-10-31T01:14:47.127922032Z" 
level=error msg="Failed to destroy network for sandbox \"88cb726ffde3e673346a9633d2a436490c0a0455c8d7b6d7ee7cfdaab2d6004b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:14:47.128383 env[1316]: time="2025-10-31T01:14:47.128324618Z" level=error msg="encountered an error cleaning up failed sandbox \"88cb726ffde3e673346a9633d2a436490c0a0455c8d7b6d7ee7cfdaab2d6004b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:14:47.128434 env[1316]: time="2025-10-31T01:14:47.128399981Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86687d576-lcpfh,Uid:7cb997cc-c908-4ddb-9523-a2aea9785811,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"88cb726ffde3e673346a9633d2a436490c0a0455c8d7b6d7ee7cfdaab2d6004b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:14:47.128722 kubelet[2122]: E1031 01:14:47.128658 2122 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88cb726ffde3e673346a9633d2a436490c0a0455c8d7b6d7ee7cfdaab2d6004b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:14:47.128805 kubelet[2122]: E1031 01:14:47.128751 2122 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"88cb726ffde3e673346a9633d2a436490c0a0455c8d7b6d7ee7cfdaab2d6004b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86687d576-lcpfh" Oct 31 01:14:47.128805 kubelet[2122]: E1031 01:14:47.128782 2122 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88cb726ffde3e673346a9633d2a436490c0a0455c8d7b6d7ee7cfdaab2d6004b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86687d576-lcpfh" Oct 31 01:14:47.128866 kubelet[2122]: E1031 01:14:47.128831 2122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-86687d576-lcpfh_calico-apiserver(7cb997cc-c908-4ddb-9523-a2aea9785811)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-86687d576-lcpfh_calico-apiserver(7cb997cc-c908-4ddb-9523-a2aea9785811)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"88cb726ffde3e673346a9633d2a436490c0a0455c8d7b6d7ee7cfdaab2d6004b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-86687d576-lcpfh" podUID="7cb997cc-c908-4ddb-9523-a2aea9785811" Oct 31 01:14:47.129938 env[1316]: time="2025-10-31T01:14:47.129888704Z" level=error msg="Failed to destroy network for sandbox \"c7a817ab27adbf161245e638067760ad35c68542e9ff0edad69720be6e1b49ae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Oct 31 01:14:47.130270 env[1316]: time="2025-10-31T01:14:47.130233330Z" level=error msg="encountered an error cleaning up failed sandbox \"c7a817ab27adbf161245e638067760ad35c68542e9ff0edad69720be6e1b49ae\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:14:47.130323 env[1316]: time="2025-10-31T01:14:47.130278556Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c78cb78df-xltv5,Uid:3bcad0e7-1720-4160-950e-8a81a3313d2c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c7a817ab27adbf161245e638067760ad35c68542e9ff0edad69720be6e1b49ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:14:47.130491 kubelet[2122]: E1031 01:14:47.130459 2122 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7a817ab27adbf161245e638067760ad35c68542e9ff0edad69720be6e1b49ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:14:47.130581 kubelet[2122]: E1031 01:14:47.130498 2122 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7a817ab27adbf161245e638067760ad35c68542e9ff0edad69720be6e1b49ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-c78cb78df-xltv5" Oct 31 01:14:47.130581 kubelet[2122]: E1031 01:14:47.130518 
2122 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7a817ab27adbf161245e638067760ad35c68542e9ff0edad69720be6e1b49ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-c78cb78df-xltv5" Oct 31 01:14:47.130581 kubelet[2122]: E1031 01:14:47.130557 2122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-c78cb78df-xltv5_calico-system(3bcad0e7-1720-4160-950e-8a81a3313d2c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-c78cb78df-xltv5_calico-system(3bcad0e7-1720-4160-950e-8a81a3313d2c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c7a817ab27adbf161245e638067760ad35c68542e9ff0edad69720be6e1b49ae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-c78cb78df-xltv5" podUID="3bcad0e7-1720-4160-950e-8a81a3313d2c" Oct 31 01:14:47.135002 env[1316]: time="2025-10-31T01:14:47.134901112Z" level=error msg="Failed to destroy network for sandbox \"679a0f53791fd69c79c416e05f0d217c02eaabfee988b0580619de6c328519d7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:14:47.135469 env[1316]: time="2025-10-31T01:14:47.135420320Z" level=error msg="encountered an error cleaning up failed sandbox \"679a0f53791fd69c79c416e05f0d217c02eaabfee988b0580619de6c328519d7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Oct 31 01:14:47.135526 env[1316]: time="2025-10-31T01:14:47.135490133Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85445fc7bc-269qr,Uid:cbcd2bd9-2395-4730-b047-aac75539fb47,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"679a0f53791fd69c79c416e05f0d217c02eaabfee988b0580619de6c328519d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:14:47.135714 kubelet[2122]: E1031 01:14:47.135678 2122 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"679a0f53791fd69c79c416e05f0d217c02eaabfee988b0580619de6c328519d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:14:47.135776 kubelet[2122]: E1031 01:14:47.135727 2122 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"679a0f53791fd69c79c416e05f0d217c02eaabfee988b0580619de6c328519d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85445fc7bc-269qr" Oct 31 01:14:47.135776 kubelet[2122]: E1031 01:14:47.135751 2122 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"679a0f53791fd69c79c416e05f0d217c02eaabfee988b0580619de6c328519d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/calico-kube-controllers-85445fc7bc-269qr" Oct 31 01:14:47.135834 kubelet[2122]: E1031 01:14:47.135798 2122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-85445fc7bc-269qr_calico-system(cbcd2bd9-2395-4730-b047-aac75539fb47)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-85445fc7bc-269qr_calico-system(cbcd2bd9-2395-4730-b047-aac75539fb47)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"679a0f53791fd69c79c416e05f0d217c02eaabfee988b0580619de6c328519d7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-85445fc7bc-269qr" podUID="cbcd2bd9-2395-4730-b047-aac75539fb47" Oct 31 01:14:47.672423 kubelet[2122]: I1031 01:14:47.672379 2122 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7a817ab27adbf161245e638067760ad35c68542e9ff0edad69720be6e1b49ae" Oct 31 01:14:47.673298 env[1316]: time="2025-10-31T01:14:47.673238229Z" level=info msg="StopPodSandbox for \"c7a817ab27adbf161245e638067760ad35c68542e9ff0edad69720be6e1b49ae\"" Oct 31 01:14:47.673634 kubelet[2122]: I1031 01:14:47.673438 2122 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8a7d15da3bf8cbdb191a985928692fcb1e7fac6e2665ad7368d328a401536ae" Oct 31 01:14:47.674350 env[1316]: time="2025-10-31T01:14:47.674277166Z" level=info msg="StopPodSandbox for \"a8a7d15da3bf8cbdb191a985928692fcb1e7fac6e2665ad7368d328a401536ae\"" Oct 31 01:14:47.675560 kubelet[2122]: I1031 01:14:47.675316 2122 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8dc1d78076c25070d75912ab77bb6fc27814dbb8dff1f2793ffbaeb1045cf72" Oct 31 01:14:47.675727 env[1316]: 
time="2025-10-31T01:14:47.675679164Z" level=info msg="StopPodSandbox for \"a8dc1d78076c25070d75912ab77bb6fc27814dbb8dff1f2793ffbaeb1045cf72\"" Oct 31 01:14:47.679130 kubelet[2122]: I1031 01:14:47.679090 2122 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c359beb4f8ec8f908fe9d4402c54b29b0240b5e569583f1b61e012ab32f2ee0" Oct 31 01:14:47.681356 env[1316]: time="2025-10-31T01:14:47.680679860Z" level=info msg="StopPodSandbox for \"4c359beb4f8ec8f908fe9d4402c54b29b0240b5e569583f1b61e012ab32f2ee0\"" Oct 31 01:14:47.681489 kubelet[2122]: I1031 01:14:47.680801 2122 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="679a0f53791fd69c79c416e05f0d217c02eaabfee988b0580619de6c328519d7" Oct 31 01:14:47.681539 env[1316]: time="2025-10-31T01:14:47.681383799Z" level=info msg="StopPodSandbox for \"679a0f53791fd69c79c416e05f0d217c02eaabfee988b0580619de6c328519d7\"" Oct 31 01:14:47.682482 kubelet[2122]: I1031 01:14:47.682435 2122 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88cb726ffde3e673346a9633d2a436490c0a0455c8d7b6d7ee7cfdaab2d6004b" Oct 31 01:14:47.683425 env[1316]: time="2025-10-31T01:14:47.683335072Z" level=info msg="StopPodSandbox for \"88cb726ffde3e673346a9633d2a436490c0a0455c8d7b6d7ee7cfdaab2d6004b\"" Oct 31 01:14:47.685469 kubelet[2122]: I1031 01:14:47.685075 2122 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1b17665232b7120056382858a1fd08faf3450b3b25b3899b71c290103c6ea43" Oct 31 01:14:47.686025 env[1316]: time="2025-10-31T01:14:47.685976618Z" level=info msg="StopPodSandbox for \"a1b17665232b7120056382858a1fd08faf3450b3b25b3899b71c290103c6ea43\"" Oct 31 01:14:47.722268 env[1316]: time="2025-10-31T01:14:47.722188494Z" level=error msg="StopPodSandbox for \"c7a817ab27adbf161245e638067760ad35c68542e9ff0edad69720be6e1b49ae\" failed" error="failed to destroy network for sandbox 
\"c7a817ab27adbf161245e638067760ad35c68542e9ff0edad69720be6e1b49ae\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:14:47.722844 kubelet[2122]: E1031 01:14:47.722630 2122 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c7a817ab27adbf161245e638067760ad35c68542e9ff0edad69720be6e1b49ae\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c7a817ab27adbf161245e638067760ad35c68542e9ff0edad69720be6e1b49ae" Oct 31 01:14:47.722844 kubelet[2122]: E1031 01:14:47.722715 2122 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c7a817ab27adbf161245e638067760ad35c68542e9ff0edad69720be6e1b49ae"} Oct 31 01:14:47.722844 kubelet[2122]: E1031 01:14:47.722786 2122 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3bcad0e7-1720-4160-950e-8a81a3313d2c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c7a817ab27adbf161245e638067760ad35c68542e9ff0edad69720be6e1b49ae\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 01:14:47.722844 kubelet[2122]: E1031 01:14:47.722813 2122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3bcad0e7-1720-4160-950e-8a81a3313d2c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c7a817ab27adbf161245e638067760ad35c68542e9ff0edad69720be6e1b49ae\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-c78cb78df-xltv5" podUID="3bcad0e7-1720-4160-950e-8a81a3313d2c" Oct 31 01:14:47.730118 env[1316]: time="2025-10-31T01:14:47.730049662Z" level=error msg="StopPodSandbox for \"a8a7d15da3bf8cbdb191a985928692fcb1e7fac6e2665ad7368d328a401536ae\" failed" error="failed to destroy network for sandbox \"a8a7d15da3bf8cbdb191a985928692fcb1e7fac6e2665ad7368d328a401536ae\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:14:47.730716 kubelet[2122]: E1031 01:14:47.730658 2122 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a8a7d15da3bf8cbdb191a985928692fcb1e7fac6e2665ad7368d328a401536ae\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a8a7d15da3bf8cbdb191a985928692fcb1e7fac6e2665ad7368d328a401536ae" Oct 31 01:14:47.730800 kubelet[2122]: E1031 01:14:47.730732 2122 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a8a7d15da3bf8cbdb191a985928692fcb1e7fac6e2665ad7368d328a401536ae"} Oct 31 01:14:47.730800 kubelet[2122]: E1031 01:14:47.730778 2122 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"50cdc712-db7a-41da-8129-57ca3765d884\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a8a7d15da3bf8cbdb191a985928692fcb1e7fac6e2665ad7368d328a401536ae\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" Oct 31 01:14:47.730928 kubelet[2122]: E1031 01:14:47.730809 2122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"50cdc712-db7a-41da-8129-57ca3765d884\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a8a7d15da3bf8cbdb191a985928692fcb1e7fac6e2665ad7368d328a401536ae\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-wj6mp" podUID="50cdc712-db7a-41da-8129-57ca3765d884" Oct 31 01:14:47.744386 env[1316]: time="2025-10-31T01:14:47.744301712Z" level=error msg="StopPodSandbox for \"a8dc1d78076c25070d75912ab77bb6fc27814dbb8dff1f2793ffbaeb1045cf72\" failed" error="failed to destroy network for sandbox \"a8dc1d78076c25070d75912ab77bb6fc27814dbb8dff1f2793ffbaeb1045cf72\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:14:47.744670 kubelet[2122]: E1031 01:14:47.744601 2122 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a8dc1d78076c25070d75912ab77bb6fc27814dbb8dff1f2793ffbaeb1045cf72\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a8dc1d78076c25070d75912ab77bb6fc27814dbb8dff1f2793ffbaeb1045cf72" Oct 31 01:14:47.744760 kubelet[2122]: E1031 01:14:47.744693 2122 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a8dc1d78076c25070d75912ab77bb6fc27814dbb8dff1f2793ffbaeb1045cf72"} Oct 31 01:14:47.744760 kubelet[2122]: E1031 01:14:47.744743 2122 
kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"aa45bbe1-c342-47d9-b9fb-8fc8197ae119\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a8dc1d78076c25070d75912ab77bb6fc27814dbb8dff1f2793ffbaeb1045cf72\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 01:14:47.744879 kubelet[2122]: E1031 01:14:47.744781 2122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"aa45bbe1-c342-47d9-b9fb-8fc8197ae119\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a8dc1d78076c25070d75912ab77bb6fc27814dbb8dff1f2793ffbaeb1045cf72\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-kx9d2" podUID="aa45bbe1-c342-47d9-b9fb-8fc8197ae119" Oct 31 01:14:47.753720 env[1316]: time="2025-10-31T01:14:47.753597070Z" level=error msg="StopPodSandbox for \"4c359beb4f8ec8f908fe9d4402c54b29b0240b5e569583f1b61e012ab32f2ee0\" failed" error="failed to destroy network for sandbox \"4c359beb4f8ec8f908fe9d4402c54b29b0240b5e569583f1b61e012ab32f2ee0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:14:47.754000 kubelet[2122]: E1031 01:14:47.753946 2122 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4c359beb4f8ec8f908fe9d4402c54b29b0240b5e569583f1b61e012ab32f2ee0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4c359beb4f8ec8f908fe9d4402c54b29b0240b5e569583f1b61e012ab32f2ee0" Oct 31 01:14:47.754087 kubelet[2122]: E1031 01:14:47.754021 2122 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4c359beb4f8ec8f908fe9d4402c54b29b0240b5e569583f1b61e012ab32f2ee0"} Oct 31 01:14:47.754087 kubelet[2122]: E1031 01:14:47.754073 2122 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"453498c9-0a59-4ad4-bd57-363364a2fea3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4c359beb4f8ec8f908fe9d4402c54b29b0240b5e569583f1b61e012ab32f2ee0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 01:14:47.754188 kubelet[2122]: E1031 01:14:47.754105 2122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"453498c9-0a59-4ad4-bd57-363364a2fea3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4c359beb4f8ec8f908fe9d4402c54b29b0240b5e569583f1b61e012ab32f2ee0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-p8xhs" podUID="453498c9-0a59-4ad4-bd57-363364a2fea3" Oct 31 01:14:47.754976 env[1316]: time="2025-10-31T01:14:47.754913785Z" level=error msg="StopPodSandbox for \"88cb726ffde3e673346a9633d2a436490c0a0455c8d7b6d7ee7cfdaab2d6004b\" failed" error="failed to destroy network for sandbox \"88cb726ffde3e673346a9633d2a436490c0a0455c8d7b6d7ee7cfdaab2d6004b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:14:47.755169 kubelet[2122]: E1031 01:14:47.755130 2122 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"88cb726ffde3e673346a9633d2a436490c0a0455c8d7b6d7ee7cfdaab2d6004b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="88cb726ffde3e673346a9633d2a436490c0a0455c8d7b6d7ee7cfdaab2d6004b" Oct 31 01:14:47.755235 kubelet[2122]: E1031 01:14:47.755176 2122 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"88cb726ffde3e673346a9633d2a436490c0a0455c8d7b6d7ee7cfdaab2d6004b"} Oct 31 01:14:47.755235 kubelet[2122]: E1031 01:14:47.755204 2122 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7cb997cc-c908-4ddb-9523-a2aea9785811\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"88cb726ffde3e673346a9633d2a436490c0a0455c8d7b6d7ee7cfdaab2d6004b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 01:14:47.755342 kubelet[2122]: E1031 01:14:47.755230 2122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7cb997cc-c908-4ddb-9523-a2aea9785811\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"88cb726ffde3e673346a9633d2a436490c0a0455c8d7b6d7ee7cfdaab2d6004b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-86687d576-lcpfh" 
podUID="7cb997cc-c908-4ddb-9523-a2aea9785811" Oct 31 01:14:47.759585 env[1316]: time="2025-10-31T01:14:47.759512466Z" level=error msg="StopPodSandbox for \"679a0f53791fd69c79c416e05f0d217c02eaabfee988b0580619de6c328519d7\" failed" error="failed to destroy network for sandbox \"679a0f53791fd69c79c416e05f0d217c02eaabfee988b0580619de6c328519d7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:14:47.759840 kubelet[2122]: E1031 01:14:47.759800 2122 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"679a0f53791fd69c79c416e05f0d217c02eaabfee988b0580619de6c328519d7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="679a0f53791fd69c79c416e05f0d217c02eaabfee988b0580619de6c328519d7" Oct 31 01:14:47.759904 kubelet[2122]: E1031 01:14:47.759865 2122 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"679a0f53791fd69c79c416e05f0d217c02eaabfee988b0580619de6c328519d7"} Oct 31 01:14:47.759936 kubelet[2122]: E1031 01:14:47.759907 2122 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cbcd2bd9-2395-4730-b047-aac75539fb47\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"679a0f53791fd69c79c416e05f0d217c02eaabfee988b0580619de6c328519d7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 01:14:47.760008 kubelet[2122]: E1031 01:14:47.759933 2122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"cbcd2bd9-2395-4730-b047-aac75539fb47\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"679a0f53791fd69c79c416e05f0d217c02eaabfee988b0580619de6c328519d7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-85445fc7bc-269qr" podUID="cbcd2bd9-2395-4730-b047-aac75539fb47" Oct 31 01:14:47.768259 env[1316]: time="2025-10-31T01:14:47.768187613Z" level=error msg="StopPodSandbox for \"a1b17665232b7120056382858a1fd08faf3450b3b25b3899b71c290103c6ea43\" failed" error="failed to destroy network for sandbox \"a1b17665232b7120056382858a1fd08faf3450b3b25b3899b71c290103c6ea43\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:14:47.768530 kubelet[2122]: E1031 01:14:47.768482 2122 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a1b17665232b7120056382858a1fd08faf3450b3b25b3899b71c290103c6ea43\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a1b17665232b7120056382858a1fd08faf3450b3b25b3899b71c290103c6ea43" Oct 31 01:14:47.768620 kubelet[2122]: E1031 01:14:47.768545 2122 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a1b17665232b7120056382858a1fd08faf3450b3b25b3899b71c290103c6ea43"} Oct 31 01:14:47.768620 kubelet[2122]: E1031 01:14:47.768586 2122 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b7a793cf-29da-4092-aaf4-95f63c307028\" with KillPodSandboxError: \"rpc error: code = Unknown 
desc = failed to destroy network for sandbox \\\"a1b17665232b7120056382858a1fd08faf3450b3b25b3899b71c290103c6ea43\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 01:14:47.768719 kubelet[2122]: E1031 01:14:47.768631 2122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b7a793cf-29da-4092-aaf4-95f63c307028\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a1b17665232b7120056382858a1fd08faf3450b3b25b3899b71c290103c6ea43\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-86687d576-r924d" podUID="b7a793cf-29da-4092-aaf4-95f63c307028" Oct 31 01:14:48.488954 env[1316]: time="2025-10-31T01:14:48.488896748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fd8js,Uid:bd0bddee-8a85-4f55-a28b-a795608cb1fb,Namespace:calico-system,Attempt:0,}" Oct 31 01:14:48.565172 env[1316]: time="2025-10-31T01:14:48.565107541Z" level=error msg="Failed to destroy network for sandbox \"6253fcc6f01781fb7beac3beb7069b429a82e077afcb9e1f805aa36607712838\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:14:48.565522 env[1316]: time="2025-10-31T01:14:48.565486592Z" level=error msg="encountered an error cleaning up failed sandbox \"6253fcc6f01781fb7beac3beb7069b429a82e077afcb9e1f805aa36607712838\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Oct 31 01:14:48.565581 env[1316]: time="2025-10-31T01:14:48.565532831Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fd8js,Uid:bd0bddee-8a85-4f55-a28b-a795608cb1fb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6253fcc6f01781fb7beac3beb7069b429a82e077afcb9e1f805aa36607712838\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:14:48.565802 kubelet[2122]: E1031 01:14:48.565763 2122 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6253fcc6f01781fb7beac3beb7069b429a82e077afcb9e1f805aa36607712838\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:14:48.565921 kubelet[2122]: E1031 01:14:48.565826 2122 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6253fcc6f01781fb7beac3beb7069b429a82e077afcb9e1f805aa36607712838\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fd8js" Oct 31 01:14:48.565983 kubelet[2122]: E1031 01:14:48.565933 2122 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6253fcc6f01781fb7beac3beb7069b429a82e077afcb9e1f805aa36607712838\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fd8js" Oct 31 01:14:48.566019 
kubelet[2122]: E1031 01:14:48.565983 2122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fd8js_calico-system(bd0bddee-8a85-4f55-a28b-a795608cb1fb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fd8js_calico-system(bd0bddee-8a85-4f55-a28b-a795608cb1fb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6253fcc6f01781fb7beac3beb7069b429a82e077afcb9e1f805aa36607712838\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fd8js" podUID="bd0bddee-8a85-4f55-a28b-a795608cb1fb" Oct 31 01:14:48.568181 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6253fcc6f01781fb7beac3beb7069b429a82e077afcb9e1f805aa36607712838-shm.mount: Deactivated successfully. Oct 31 01:14:48.699055 kubelet[2122]: I1031 01:14:48.699011 2122 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6253fcc6f01781fb7beac3beb7069b429a82e077afcb9e1f805aa36607712838" Oct 31 01:14:48.701129 env[1316]: time="2025-10-31T01:14:48.701074463Z" level=info msg="StopPodSandbox for \"6253fcc6f01781fb7beac3beb7069b429a82e077afcb9e1f805aa36607712838\"" Oct 31 01:14:48.729713 env[1316]: time="2025-10-31T01:14:48.729641258Z" level=error msg="StopPodSandbox for \"6253fcc6f01781fb7beac3beb7069b429a82e077afcb9e1f805aa36607712838\" failed" error="failed to destroy network for sandbox \"6253fcc6f01781fb7beac3beb7069b429a82e077afcb9e1f805aa36607712838\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:14:48.729954 kubelet[2122]: E1031 01:14:48.729906 2122 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to 
destroy network for sandbox \"6253fcc6f01781fb7beac3beb7069b429a82e077afcb9e1f805aa36607712838\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6253fcc6f01781fb7beac3beb7069b429a82e077afcb9e1f805aa36607712838" Oct 31 01:14:48.730034 kubelet[2122]: E1031 01:14:48.729974 2122 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6253fcc6f01781fb7beac3beb7069b429a82e077afcb9e1f805aa36607712838"} Oct 31 01:14:48.730034 kubelet[2122]: E1031 01:14:48.730013 2122 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bd0bddee-8a85-4f55-a28b-a795608cb1fb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6253fcc6f01781fb7beac3beb7069b429a82e077afcb9e1f805aa36607712838\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 01:14:48.730149 kubelet[2122]: E1031 01:14:48.730040 2122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bd0bddee-8a85-4f55-a28b-a795608cb1fb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6253fcc6f01781fb7beac3beb7069b429a82e077afcb9e1f805aa36607712838\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fd8js" podUID="bd0bddee-8a85-4f55-a28b-a795608cb1fb" Oct 31 01:14:51.947774 kernel: kauditd_printk_skb: 25 callbacks suppressed Oct 31 01:14:51.947933 kernel: audit: type=1130 audit(1761873291.937:291): pid=1 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.95:22-10.0.0.1:41678 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:14:51.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.95:22-10.0.0.1:41678 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:14:51.937551 systemd[1]: Started sshd@7-10.0.0.95:22-10.0.0.1:41678.service. Oct 31 01:14:51.973000 audit[3345]: USER_ACCT pid=3345 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:14:51.978857 sshd[3345]: Accepted publickey for core from 10.0.0.1 port 41678 ssh2: RSA SHA256:BzWaVf4M0LrLtWllQvHpK+M/9x+T9duV7gwz9J5cQAA Oct 31 01:14:51.989789 kernel: audit: type=1101 audit(1761873291.973:292): pid=3345 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:14:51.990055 kernel: audit: type=1103 audit(1761873291.981:293): pid=3345 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:14:51.981000 audit[3345]: CRED_ACQ pid=3345 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:14:51.989446 systemd[1]: Started session-8.scope. 
Oct 31 01:14:51.983039 sshd[3345]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 01:14:51.995251 kernel: audit: type=1006 audit(1761873291.981:294): pid=3345 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Oct 31 01:14:51.990671 systemd-logind[1300]: New session 8 of user core. Oct 31 01:14:51.981000 audit[3345]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcf1349a30 a2=3 a3=0 items=0 ppid=1 pid=3345 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:51.981000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 01:14:52.005406 kernel: audit: type=1300 audit(1761873291.981:294): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcf1349a30 a2=3 a3=0 items=0 ppid=1 pid=3345 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:52.007068 kernel: audit: type=1327 audit(1761873291.981:294): proctitle=737368643A20636F7265205B707269765D Oct 31 01:14:52.007117 kernel: audit: type=1105 audit(1761873291.995:295): pid=3345 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:14:51.995000 audit[3345]: USER_START pid=3345 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:14:51.997000 audit[3348]: CRED_ACQ pid=3348 uid=0 auid=500 ses=8 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:14:52.030290 kernel: audit: type=1103 audit(1761873291.997:296): pid=3348 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:14:52.121589 sshd[3345]: pam_unix(sshd:session): session closed for user core Oct 31 01:14:52.122000 audit[3345]: USER_END pid=3345 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:14:52.124256 systemd[1]: sshd@7-10.0.0.95:22-10.0.0.1:41678.service: Deactivated successfully. Oct 31 01:14:52.124998 systemd[1]: session-8.scope: Deactivated successfully. 
Oct 31 01:14:52.122000 audit[3345]: CRED_DISP pid=3345 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:14:52.137124 kernel: audit: type=1106 audit(1761873292.122:297): pid=3345 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:14:52.137187 kernel: audit: type=1104 audit(1761873292.122:298): pid=3345 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:14:52.123000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.95:22-10.0.0.1:41678 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:14:52.137843 systemd-logind[1300]: Session 8 logged out. Waiting for processes to exit. Oct 31 01:14:52.138626 systemd-logind[1300]: Removed session 8. Oct 31 01:14:53.084058 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount42414794.mount: Deactivated successfully. 
Oct 31 01:14:54.016996 env[1316]: time="2025-10-31T01:14:54.016924062Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:14:54.018927 env[1316]: time="2025-10-31T01:14:54.018903709Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:14:54.020359 env[1316]: time="2025-10-31T01:14:54.020293617Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:14:54.021944 env[1316]: time="2025-10-31T01:14:54.021898242Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 01:14:54.022288 env[1316]: time="2025-10-31T01:14:54.022251152Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Oct 31 01:14:54.030662 env[1316]: time="2025-10-31T01:14:54.030575351Z" level=info msg="CreateContainer within sandbox \"ea18f47df5bab96a3b893f5407c3010cfd2a1f1c3f62aa1c9ae8e1174c7501d4\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 31 01:14:54.048887 env[1316]: time="2025-10-31T01:14:54.048807625Z" level=info msg="CreateContainer within sandbox \"ea18f47df5bab96a3b893f5407c3010cfd2a1f1c3f62aa1c9ae8e1174c7501d4\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"deaf471bcc9366e2a48e55fd40816fbdafc618d63240cc136b0efd06e7d1fcb5\"" Oct 31 01:14:54.049434 env[1316]: time="2025-10-31T01:14:54.049390232Z" level=info msg="StartContainer for 
\"deaf471bcc9366e2a48e55fd40816fbdafc618d63240cc136b0efd06e7d1fcb5\"" Oct 31 01:14:54.200068 env[1316]: time="2025-10-31T01:14:54.199992021Z" level=info msg="StartContainer for \"deaf471bcc9366e2a48e55fd40816fbdafc618d63240cc136b0efd06e7d1fcb5\" returns successfully" Oct 31 01:14:54.242130 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 31 01:14:54.242357 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Oct 31 01:14:54.368138 env[1316]: time="2025-10-31T01:14:54.367993995Z" level=info msg="StopPodSandbox for \"c7a817ab27adbf161245e638067760ad35c68542e9ff0edad69720be6e1b49ae\"" Oct 31 01:14:54.546263 env[1316]: 2025-10-31 01:14:54.471 [INFO][3425] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c7a817ab27adbf161245e638067760ad35c68542e9ff0edad69720be6e1b49ae" Oct 31 01:14:54.546263 env[1316]: 2025-10-31 01:14:54.471 [INFO][3425] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c7a817ab27adbf161245e638067760ad35c68542e9ff0edad69720be6e1b49ae" iface="eth0" netns="/var/run/netns/cni-785b1780-d32f-2251-10d9-ca8f770d5660" Oct 31 01:14:54.546263 env[1316]: 2025-10-31 01:14:54.472 [INFO][3425] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c7a817ab27adbf161245e638067760ad35c68542e9ff0edad69720be6e1b49ae" iface="eth0" netns="/var/run/netns/cni-785b1780-d32f-2251-10d9-ca8f770d5660" Oct 31 01:14:54.546263 env[1316]: 2025-10-31 01:14:54.472 [INFO][3425] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c7a817ab27adbf161245e638067760ad35c68542e9ff0edad69720be6e1b49ae" iface="eth0" netns="/var/run/netns/cni-785b1780-d32f-2251-10d9-ca8f770d5660" Oct 31 01:14:54.546263 env[1316]: 2025-10-31 01:14:54.472 [INFO][3425] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c7a817ab27adbf161245e638067760ad35c68542e9ff0edad69720be6e1b49ae" Oct 31 01:14:54.546263 env[1316]: 2025-10-31 01:14:54.472 [INFO][3425] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c7a817ab27adbf161245e638067760ad35c68542e9ff0edad69720be6e1b49ae" Oct 31 01:14:54.546263 env[1316]: 2025-10-31 01:14:54.532 [INFO][3435] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c7a817ab27adbf161245e638067760ad35c68542e9ff0edad69720be6e1b49ae" HandleID="k8s-pod-network.c7a817ab27adbf161245e638067760ad35c68542e9ff0edad69720be6e1b49ae" Workload="localhost-k8s-whisker--c78cb78df--xltv5-eth0" Oct 31 01:14:54.546263 env[1316]: 2025-10-31 01:14:54.532 [INFO][3435] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:14:54.546263 env[1316]: 2025-10-31 01:14:54.533 [INFO][3435] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:14:54.546263 env[1316]: 2025-10-31 01:14:54.539 [WARNING][3435] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c7a817ab27adbf161245e638067760ad35c68542e9ff0edad69720be6e1b49ae" HandleID="k8s-pod-network.c7a817ab27adbf161245e638067760ad35c68542e9ff0edad69720be6e1b49ae" Workload="localhost-k8s-whisker--c78cb78df--xltv5-eth0" Oct 31 01:14:54.546263 env[1316]: 2025-10-31 01:14:54.539 [INFO][3435] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c7a817ab27adbf161245e638067760ad35c68542e9ff0edad69720be6e1b49ae" HandleID="k8s-pod-network.c7a817ab27adbf161245e638067760ad35c68542e9ff0edad69720be6e1b49ae" Workload="localhost-k8s-whisker--c78cb78df--xltv5-eth0" Oct 31 01:14:54.546263 env[1316]: 2025-10-31 01:14:54.541 [INFO][3435] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:14:54.546263 env[1316]: 2025-10-31 01:14:54.544 [INFO][3425] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c7a817ab27adbf161245e638067760ad35c68542e9ff0edad69720be6e1b49ae" Oct 31 01:14:54.549383 systemd[1]: run-netns-cni\x2d785b1780\x2dd32f\x2d2251\x2d10d9\x2dca8f770d5660.mount: Deactivated successfully. 
Oct 31 01:14:54.550806 env[1316]: time="2025-10-31T01:14:54.550745986Z" level=info msg="TearDown network for sandbox \"c7a817ab27adbf161245e638067760ad35c68542e9ff0edad69720be6e1b49ae\" successfully" Oct 31 01:14:54.550806 env[1316]: time="2025-10-31T01:14:54.550802833Z" level=info msg="StopPodSandbox for \"c7a817ab27adbf161245e638067760ad35c68542e9ff0edad69720be6e1b49ae\" returns successfully" Oct 31 01:14:54.651097 kubelet[2122]: I1031 01:14:54.650955 2122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3bcad0e7-1720-4160-950e-8a81a3313d2c-whisker-backend-key-pair\") pod \"3bcad0e7-1720-4160-950e-8a81a3313d2c\" (UID: \"3bcad0e7-1720-4160-950e-8a81a3313d2c\") " Oct 31 01:14:54.654737 kubelet[2122]: I1031 01:14:54.654687 2122 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bcad0e7-1720-4160-950e-8a81a3313d2c-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "3bcad0e7-1720-4160-950e-8a81a3313d2c" (UID: "3bcad0e7-1720-4160-950e-8a81a3313d2c"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 31 01:14:54.656251 systemd[1]: var-lib-kubelet-pods-3bcad0e7\x2d1720\x2d4160\x2d950e\x2d8a81a3313d2c-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Oct 31 01:14:54.720539 kubelet[2122]: E1031 01:14:54.720055 2122 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:14:54.737673 kubelet[2122]: I1031 01:14:54.737582 2122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-9stjm" podStartSLOduration=2.01181657 podStartE2EDuration="20.737558021s" podCreationTimestamp="2025-10-31 01:14:34 +0000 UTC" firstStartedPulling="2025-10-31 01:14:35.297311223 +0000 UTC m=+22.914764986" lastFinishedPulling="2025-10-31 01:14:54.023052674 +0000 UTC m=+41.640506437" observedRunningTime="2025-10-31 01:14:54.737167189 +0000 UTC m=+42.354620952" watchObservedRunningTime="2025-10-31 01:14:54.737558021 +0000 UTC m=+42.355011784" Oct 31 01:14:54.751518 kubelet[2122]: I1031 01:14:54.751478 2122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3bcad0e7-1720-4160-950e-8a81a3313d2c-whisker-ca-bundle\") pod \"3bcad0e7-1720-4160-950e-8a81a3313d2c\" (UID: \"3bcad0e7-1720-4160-950e-8a81a3313d2c\") " Oct 31 01:14:54.751647 kubelet[2122]: I1031 01:14:54.751578 2122 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzgc4\" (UniqueName: \"kubernetes.io/projected/3bcad0e7-1720-4160-950e-8a81a3313d2c-kube-api-access-fzgc4\") pod \"3bcad0e7-1720-4160-950e-8a81a3313d2c\" (UID: \"3bcad0e7-1720-4160-950e-8a81a3313d2c\") " Oct 31 01:14:54.751855 kubelet[2122]: I1031 01:14:54.751807 2122 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3bcad0e7-1720-4160-950e-8a81a3313d2c-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Oct 31 01:14:54.751891 kubelet[2122]: I1031 01:14:54.751848 2122 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/3bcad0e7-1720-4160-950e-8a81a3313d2c-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "3bcad0e7-1720-4160-950e-8a81a3313d2c" (UID: "3bcad0e7-1720-4160-950e-8a81a3313d2c"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 31 01:14:54.754111 kubelet[2122]: I1031 01:14:54.754074 2122 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bcad0e7-1720-4160-950e-8a81a3313d2c-kube-api-access-fzgc4" (OuterVolumeSpecName: "kube-api-access-fzgc4") pod "3bcad0e7-1720-4160-950e-8a81a3313d2c" (UID: "3bcad0e7-1720-4160-950e-8a81a3313d2c"). InnerVolumeSpecName "kube-api-access-fzgc4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 31 01:14:54.756320 systemd[1]: var-lib-kubelet-pods-3bcad0e7\x2d1720\x2d4160\x2d950e\x2d8a81a3313d2c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfzgc4.mount: Deactivated successfully. Oct 31 01:14:54.852555 kubelet[2122]: I1031 01:14:54.852497 2122 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3bcad0e7-1720-4160-950e-8a81a3313d2c-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Oct 31 01:14:54.852555 kubelet[2122]: I1031 01:14:54.852540 2122 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fzgc4\" (UniqueName: \"kubernetes.io/projected/3bcad0e7-1720-4160-950e-8a81a3313d2c-kube-api-access-fzgc4\") on node \"localhost\" DevicePath \"\"" Oct 31 01:14:55.154092 kubelet[2122]: I1031 01:14:55.154044 2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9f314ab5-dad4-417f-bff7-f3843175cd3e-whisker-backend-key-pair\") pod \"whisker-86b4655b9-f4c4n\" (UID: \"9f314ab5-dad4-417f-bff7-f3843175cd3e\") " pod="calico-system/whisker-86b4655b9-f4c4n" Oct 31 01:14:55.154313 kubelet[2122]: I1031 
01:14:55.154113 2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gq2m\" (UniqueName: \"kubernetes.io/projected/9f314ab5-dad4-417f-bff7-f3843175cd3e-kube-api-access-7gq2m\") pod \"whisker-86b4655b9-f4c4n\" (UID: \"9f314ab5-dad4-417f-bff7-f3843175cd3e\") " pod="calico-system/whisker-86b4655b9-f4c4n" Oct 31 01:14:55.154313 kubelet[2122]: I1031 01:14:55.154144 2122 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f314ab5-dad4-417f-bff7-f3843175cd3e-whisker-ca-bundle\") pod \"whisker-86b4655b9-f4c4n\" (UID: \"9f314ab5-dad4-417f-bff7-f3843175cd3e\") " pod="calico-system/whisker-86b4655b9-f4c4n" Oct 31 01:14:55.415395 env[1316]: time="2025-10-31T01:14:55.415353533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-86b4655b9-f4c4n,Uid:9f314ab5-dad4-417f-bff7-f3843175cd3e,Namespace:calico-system,Attempt:0,}" Oct 31 01:14:55.521902 systemd-networkd[1085]: calie118da01f00: Link UP Oct 31 01:14:55.526181 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Oct 31 01:14:55.526454 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calie118da01f00: link becomes ready Oct 31 01:14:55.526386 systemd-networkd[1085]: calie118da01f00: Gained carrier Oct 31 01:14:55.532477 env[1316]: 2025-10-31 01:14:55.448 [INFO][3457] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 31 01:14:55.532477 env[1316]: 2025-10-31 01:14:55.458 [INFO][3457] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--86b4655b9--f4c4n-eth0 whisker-86b4655b9- calico-system 9f314ab5-dad4-417f-bff7-f3843175cd3e 951 0 2025-10-31 01:14:55 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:86b4655b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-86b4655b9-f4c4n eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calie118da01f00 [] [] }} ContainerID="840f68fb4bbf87ec322c9512463a046af8de7cea1bbacb030530f6f1140b236c" Namespace="calico-system" Pod="whisker-86b4655b9-f4c4n" WorkloadEndpoint="localhost-k8s-whisker--86b4655b9--f4c4n-" Oct 31 01:14:55.532477 env[1316]: 2025-10-31 01:14:55.459 [INFO][3457] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="840f68fb4bbf87ec322c9512463a046af8de7cea1bbacb030530f6f1140b236c" Namespace="calico-system" Pod="whisker-86b4655b9-f4c4n" WorkloadEndpoint="localhost-k8s-whisker--86b4655b9--f4c4n-eth0" Oct 31 01:14:55.532477 env[1316]: 2025-10-31 01:14:55.481 [INFO][3473] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="840f68fb4bbf87ec322c9512463a046af8de7cea1bbacb030530f6f1140b236c" HandleID="k8s-pod-network.840f68fb4bbf87ec322c9512463a046af8de7cea1bbacb030530f6f1140b236c" Workload="localhost-k8s-whisker--86b4655b9--f4c4n-eth0" Oct 31 01:14:55.532477 env[1316]: 2025-10-31 01:14:55.481 [INFO][3473] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="840f68fb4bbf87ec322c9512463a046af8de7cea1bbacb030530f6f1140b236c" HandleID="k8s-pod-network.840f68fb4bbf87ec322c9512463a046af8de7cea1bbacb030530f6f1140b236c" Workload="localhost-k8s-whisker--86b4655b9--f4c4n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f6e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-86b4655b9-f4c4n", "timestamp":"2025-10-31 01:14:55.481138627 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 01:14:55.532477 env[1316]: 2025-10-31 01:14:55.481 [INFO][3473] ipam/ipam_plugin.go 377: About to acquire 
host-wide IPAM lock. Oct 31 01:14:55.532477 env[1316]: 2025-10-31 01:14:55.481 [INFO][3473] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:14:55.532477 env[1316]: 2025-10-31 01:14:55.481 [INFO][3473] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 01:14:55.532477 env[1316]: 2025-10-31 01:14:55.489 [INFO][3473] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.840f68fb4bbf87ec322c9512463a046af8de7cea1bbacb030530f6f1140b236c" host="localhost" Oct 31 01:14:55.532477 env[1316]: 2025-10-31 01:14:55.493 [INFO][3473] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 01:14:55.532477 env[1316]: 2025-10-31 01:14:55.496 [INFO][3473] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 01:14:55.532477 env[1316]: 2025-10-31 01:14:55.497 [INFO][3473] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 01:14:55.532477 env[1316]: 2025-10-31 01:14:55.499 [INFO][3473] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 01:14:55.532477 env[1316]: 2025-10-31 01:14:55.499 [INFO][3473] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.840f68fb4bbf87ec322c9512463a046af8de7cea1bbacb030530f6f1140b236c" host="localhost" Oct 31 01:14:55.532477 env[1316]: 2025-10-31 01:14:55.500 [INFO][3473] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.840f68fb4bbf87ec322c9512463a046af8de7cea1bbacb030530f6f1140b236c Oct 31 01:14:55.532477 env[1316]: 2025-10-31 01:14:55.506 [INFO][3473] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.840f68fb4bbf87ec322c9512463a046af8de7cea1bbacb030530f6f1140b236c" host="localhost" Oct 31 01:14:55.532477 env[1316]: 2025-10-31 01:14:55.511 [INFO][3473] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] 
block=192.168.88.128/26 handle="k8s-pod-network.840f68fb4bbf87ec322c9512463a046af8de7cea1bbacb030530f6f1140b236c" host="localhost" Oct 31 01:14:55.532477 env[1316]: 2025-10-31 01:14:55.511 [INFO][3473] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.840f68fb4bbf87ec322c9512463a046af8de7cea1bbacb030530f6f1140b236c" host="localhost" Oct 31 01:14:55.532477 env[1316]: 2025-10-31 01:14:55.511 [INFO][3473] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:14:55.532477 env[1316]: 2025-10-31 01:14:55.511 [INFO][3473] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="840f68fb4bbf87ec322c9512463a046af8de7cea1bbacb030530f6f1140b236c" HandleID="k8s-pod-network.840f68fb4bbf87ec322c9512463a046af8de7cea1bbacb030530f6f1140b236c" Workload="localhost-k8s-whisker--86b4655b9--f4c4n-eth0" Oct 31 01:14:55.533124 env[1316]: 2025-10-31 01:14:55.513 [INFO][3457] cni-plugin/k8s.go 418: Populated endpoint ContainerID="840f68fb4bbf87ec322c9512463a046af8de7cea1bbacb030530f6f1140b236c" Namespace="calico-system" Pod="whisker-86b4655b9-f4c4n" WorkloadEndpoint="localhost-k8s-whisker--86b4655b9--f4c4n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--86b4655b9--f4c4n-eth0", GenerateName:"whisker-86b4655b9-", Namespace:"calico-system", SelfLink:"", UID:"9f314ab5-dad4-417f-bff7-f3843175cd3e", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 14, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"86b4655b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-86b4655b9-f4c4n", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie118da01f00", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:14:55.533124 env[1316]: 2025-10-31 01:14:55.514 [INFO][3457] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="840f68fb4bbf87ec322c9512463a046af8de7cea1bbacb030530f6f1140b236c" Namespace="calico-system" Pod="whisker-86b4655b9-f4c4n" WorkloadEndpoint="localhost-k8s-whisker--86b4655b9--f4c4n-eth0" Oct 31 01:14:55.533124 env[1316]: 2025-10-31 01:14:55.514 [INFO][3457] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie118da01f00 ContainerID="840f68fb4bbf87ec322c9512463a046af8de7cea1bbacb030530f6f1140b236c" Namespace="calico-system" Pod="whisker-86b4655b9-f4c4n" WorkloadEndpoint="localhost-k8s-whisker--86b4655b9--f4c4n-eth0" Oct 31 01:14:55.533124 env[1316]: 2025-10-31 01:14:55.522 [INFO][3457] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="840f68fb4bbf87ec322c9512463a046af8de7cea1bbacb030530f6f1140b236c" Namespace="calico-system" Pod="whisker-86b4655b9-f4c4n" WorkloadEndpoint="localhost-k8s-whisker--86b4655b9--f4c4n-eth0" Oct 31 01:14:55.533124 env[1316]: 2025-10-31 01:14:55.522 [INFO][3457] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="840f68fb4bbf87ec322c9512463a046af8de7cea1bbacb030530f6f1140b236c" Namespace="calico-system" Pod="whisker-86b4655b9-f4c4n" WorkloadEndpoint="localhost-k8s-whisker--86b4655b9--f4c4n-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--86b4655b9--f4c4n-eth0", GenerateName:"whisker-86b4655b9-", Namespace:"calico-system", SelfLink:"", UID:"9f314ab5-dad4-417f-bff7-f3843175cd3e", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 14, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"86b4655b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"840f68fb4bbf87ec322c9512463a046af8de7cea1bbacb030530f6f1140b236c", Pod:"whisker-86b4655b9-f4c4n", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie118da01f00", MAC:"46:ad:94:87:fc:81", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:14:55.533124 env[1316]: 2025-10-31 01:14:55.530 [INFO][3457] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="840f68fb4bbf87ec322c9512463a046af8de7cea1bbacb030530f6f1140b236c" Namespace="calico-system" Pod="whisker-86b4655b9-f4c4n" WorkloadEndpoint="localhost-k8s-whisker--86b4655b9--f4c4n-eth0" Oct 31 01:14:55.544377 env[1316]: time="2025-10-31T01:14:55.544276228Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 01:14:55.544377 env[1316]: time="2025-10-31T01:14:55.544316705Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 01:14:55.544635 env[1316]: time="2025-10-31T01:14:55.544326934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:14:55.544707 env[1316]: time="2025-10-31T01:14:55.544638475Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/840f68fb4bbf87ec322c9512463a046af8de7cea1bbacb030530f6f1140b236c pid=3496 runtime=io.containerd.runc.v2 Oct 31 01:14:55.566193 systemd-resolved[1231]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 01:14:55.608646 env[1316]: time="2025-10-31T01:14:55.605075922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-86b4655b9-f4c4n,Uid:9f314ab5-dad4-417f-bff7-f3843175cd3e,Namespace:calico-system,Attempt:0,} returns sandbox id \"840f68fb4bbf87ec322c9512463a046af8de7cea1bbacb030530f6f1140b236c\"" Oct 31 01:14:55.608646 env[1316]: time="2025-10-31T01:14:55.606578944Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 31 01:14:55.609000 audit[3555]: AVC avc: denied { write } for pid=3555 comm="tee" name="fd" dev="proc" ino=23323 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 31 01:14:55.609000 audit[3555]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff1c1957d9 a2=241 a3=1b6 items=1 ppid=3538 pid=3555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:55.609000 audit: CWD 
cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Oct 31 01:14:55.609000 audit: PATH item=0 name="/dev/fd/63" inode=24438 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:14:55.609000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 31 01:14:55.615000 audit[3574]: AVC avc: denied { write } for pid=3574 comm="tee" name="fd" dev="proc" ino=23332 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 31 01:14:55.615000 audit[3574]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe1fa817da a2=241 a3=1b6 items=1 ppid=3536 pid=3574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:55.615000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Oct 31 01:14:55.615000 audit: PATH item=0 name="/dev/fd/63" inode=23314 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:14:55.615000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 31 01:14:55.621000 audit[3592]: AVC avc: denied { write } for pid=3592 comm="tee" name="fd" dev="proc" ino=25637 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 31 01:14:55.621000 audit[3592]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe0e2847ea a2=241 a3=1b6 items=1 ppid=3540 pid=3592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:55.621000 audit: CWD cwd="/etc/service/enabled/bird/log" Oct 31 01:14:55.621000 audit: PATH item=0 name="/dev/fd/63" inode=24447 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:14:55.621000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 31 01:14:55.624000 audit[3611]: AVC avc: denied { write } for pid=3611 comm="tee" name="fd" dev="proc" ino=25641 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 31 01:14:55.624000 audit[3611]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcd94797e9 a2=241 a3=1b6 items=1 ppid=3543 pid=3611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:55.624000 audit: CWD cwd="/etc/service/enabled/confd/log" Oct 31 01:14:55.624000 audit: PATH item=0 name="/dev/fd/63" inode=23337 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:14:55.624000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 31 01:14:55.626000 audit[3606]: AVC avc: denied { write } for pid=3606 comm="tee" name="fd" dev="proc" ino=25320 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 31 01:14:55.626000 audit[3606]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffeebc137eb a2=241 a3=1b6 items=1 ppid=3531 pid=3606 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:55.626000 audit: CWD cwd="/etc/service/enabled/cni/log" Oct 31 01:14:55.626000 audit: PATH item=0 name="/dev/fd/63" inode=24460 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:14:55.626000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 31 01:14:55.626000 audit[3600]: AVC avc: denied { write } for pid=3600 comm="tee" name="fd" dev="proc" ino=25645 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 31 01:14:55.626000 audit[3600]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd19a847e9 a2=241 a3=1b6 items=1 ppid=3533 pid=3600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:55.626000 audit: CWD cwd="/etc/service/enabled/felix/log" Oct 31 01:14:55.626000 audit: PATH item=0 name="/dev/fd/63" inode=23329 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:14:55.626000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 31 01:14:55.667000 audit[3618]: AVC avc: denied { write } for pid=3618 comm="tee" name="fd" dev="proc" ino=24464 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 31 01:14:55.667000 audit[3618]: SYSCALL arch=c000003e syscall=257 
success=yes exit=3 a0=ffffff9c a1=7ffe5a1d77e9 a2=241 a3=1b6 items=1 ppid=3534 pid=3618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:55.667000 audit: CWD cwd="/etc/service/enabled/bird6/log" Oct 31 01:14:55.667000 audit: PATH item=0 name="/dev/fd/63" inode=23338 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 01:14:55.667000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 31 01:14:55.722179 kubelet[2122]: E1031 01:14:55.722130 2122 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:14:55.994805 env[1316]: time="2025-10-31T01:14:55.994633697Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:14:55.996176 env[1316]: time="2025-10-31T01:14:55.996086384Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 31 01:14:55.996415 kubelet[2122]: E1031 01:14:55.996360 2122 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 01:14:55.996515 kubelet[2122]: E1031 01:14:55.996427 2122 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 01:14:55.997601 kubelet[2122]: E1031 01:14:55.997559 2122 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:8c280e58b5284c02a79bc96b4b32937d,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7gq2m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-86b4655b9-f4c4n_calico-system(9f314ab5-dad4-417f-bff7-f3843175cd3e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 31 01:14:55.999475 env[1316]: time="2025-10-31T01:14:55.999423467Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 31 01:14:56.326182 env[1316]: time="2025-10-31T01:14:56.326046484Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:14:56.327172 env[1316]: time="2025-10-31T01:14:56.327116674Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 31 01:14:56.327450 kubelet[2122]: E1031 01:14:56.327391 2122 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 01:14:56.327450 kubelet[2122]: E1031 01:14:56.327454 2122 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 01:14:56.327666 kubelet[2122]: E1031 01:14:56.327574 2122 
kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7gq2m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-86b4655b9-f4c4n_calico-system(9f314ab5-dad4-417f-bff7-f3843175cd3e): ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 31 01:14:56.329528 kubelet[2122]: E1031 01:14:56.329488 2122 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86b4655b9-f4c4n" podUID="9f314ab5-dad4-417f-bff7-f3843175cd3e" Oct 31 01:14:56.487989 kubelet[2122]: I1031 01:14:56.487938 2122 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bcad0e7-1720-4160-950e-8a81a3313d2c" path="/var/lib/kubelet/pods/3bcad0e7-1720-4160-950e-8a81a3313d2c/volumes" Oct 31 01:14:56.727592 kubelet[2122]: E1031 01:14:56.727552 2122 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:14:56.730236 kubelet[2122]: E1031 01:14:56.730158 2122 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86b4655b9-f4c4n" podUID="9f314ab5-dad4-417f-bff7-f3843175cd3e" Oct 31 01:14:56.757813 systemd[1]: run-containerd-runc-k8s.io-deaf471bcc9366e2a48e55fd40816fbdafc618d63240cc136b0efd06e7d1fcb5-runc.iYKh35.mount: Deactivated successfully. Oct 31 01:14:56.779000 audit[3676]: NETFILTER_CFG table=filter:99 family=2 entries=22 op=nft_register_rule pid=3676 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:14:56.779000 audit[3676]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffc2275fff0 a2=0 a3=7ffc2275ffdc items=0 ppid=2275 pid=3676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:56.779000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:14:56.786000 audit[3676]: NETFILTER_CFG table=nat:100 family=2 entries=12 op=nft_register_rule pid=3676 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:14:56.786000 audit[3676]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc2275fff0 a2=0 a3=0 items=0 ppid=2275 pid=3676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:56.786000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:14:57.124469 systemd[1]: Started sshd@8-10.0.0.95:22-10.0.0.1:41682.service. Oct 31 01:14:57.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.95:22-10.0.0.1:41682 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:14:57.131661 kernel: kauditd_printk_skb: 42 callbacks suppressed Oct 31 01:14:57.131759 kernel: audit: type=1130 audit(1761873297.124:309): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.95:22-10.0.0.1:41682 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:14:57.157000 audit[3697]: USER_ACCT pid=3697 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:14:57.158199 sshd[3697]: Accepted publickey for core from 10.0.0.1 port 41682 ssh2: RSA SHA256:BzWaVf4M0LrLtWllQvHpK+M/9x+T9duV7gwz9J5cQAA Oct 31 01:14:57.160184 sshd[3697]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 01:14:57.165278 systemd-logind[1300]: New session 9 of user core. Oct 31 01:14:57.159000 audit[3697]: CRED_ACQ pid=3697 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:14:57.166782 systemd[1]: Started session-9.scope. 
Oct 31 01:14:57.173492 kernel: audit: type=1101 audit(1761873297.157:310): pid=3697 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:14:57.173579 kernel: audit: type=1103 audit(1761873297.159:311): pid=3697 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:14:57.173620 kernel: audit: type=1006 audit(1761873297.159:312): pid=3697 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Oct 31 01:14:57.177592 kernel: audit: type=1300 audit(1761873297.159:312): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffd85b2440 a2=3 a3=0 items=0 ppid=1 pid=3697 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:57.159000 audit[3697]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffd85b2440 a2=3 a3=0 items=0 ppid=1 pid=3697 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:14:57.159000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 01:14:57.186869 kernel: audit: type=1327 audit(1761873297.159:312): proctitle=737368643A20636F7265205B707269765D Oct 31 01:14:57.186916 kernel: audit: type=1105 audit(1761873297.172:313): pid=3697 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Oct 31 01:14:57.172000 audit[3697]: USER_START pid=3697 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:14:57.174000 audit[3700]: CRED_ACQ pid=3700 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:14:57.200217 kernel: audit: type=1103 audit(1761873297.174:314): pid=3700 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:14:57.273844 sshd[3697]: pam_unix(sshd:session): session closed for user core Oct 31 01:14:57.274000 audit[3697]: USER_END pid=3697 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:14:57.276293 systemd[1]: sshd@8-10.0.0.95:22-10.0.0.1:41682.service: Deactivated successfully. Oct 31 01:14:57.277241 systemd[1]: session-9.scope: Deactivated successfully. Oct 31 01:14:57.277639 systemd-logind[1300]: Session 9 logged out. Waiting for processes to exit. Oct 31 01:14:57.278274 systemd-logind[1300]: Removed session 9. 
Oct 31 01:14:57.274000 audit[3697]: CRED_DISP pid=3697 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:14:57.289364 kernel: audit: type=1106 audit(1761873297.274:315): pid=3697 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:14:57.289417 kernel: audit: type=1104 audit(1761873297.274:316): pid=3697 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:14:57.275000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.95:22-10.0.0.1:41682 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:14:57.569783 systemd-networkd[1085]: calie118da01f00: Gained IPv6LL Oct 31 01:14:59.486579 env[1316]: time="2025-10-31T01:14:59.486525713Z" level=info msg="StopPodSandbox for \"a1b17665232b7120056382858a1fd08faf3450b3b25b3899b71c290103c6ea43\"" Oct 31 01:14:59.555128 env[1316]: 2025-10-31 01:14:59.523 [INFO][3772] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a1b17665232b7120056382858a1fd08faf3450b3b25b3899b71c290103c6ea43" Oct 31 01:14:59.555128 env[1316]: 2025-10-31 01:14:59.523 [INFO][3772] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="a1b17665232b7120056382858a1fd08faf3450b3b25b3899b71c290103c6ea43" iface="eth0" netns="/var/run/netns/cni-e03d25f5-6bf4-4429-e326-285d8415256b" Oct 31 01:14:59.555128 env[1316]: 2025-10-31 01:14:59.524 [INFO][3772] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a1b17665232b7120056382858a1fd08faf3450b3b25b3899b71c290103c6ea43" iface="eth0" netns="/var/run/netns/cni-e03d25f5-6bf4-4429-e326-285d8415256b" Oct 31 01:14:59.555128 env[1316]: 2025-10-31 01:14:59.524 [INFO][3772] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a1b17665232b7120056382858a1fd08faf3450b3b25b3899b71c290103c6ea43" iface="eth0" netns="/var/run/netns/cni-e03d25f5-6bf4-4429-e326-285d8415256b" Oct 31 01:14:59.555128 env[1316]: 2025-10-31 01:14:59.524 [INFO][3772] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a1b17665232b7120056382858a1fd08faf3450b3b25b3899b71c290103c6ea43" Oct 31 01:14:59.555128 env[1316]: 2025-10-31 01:14:59.524 [INFO][3772] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a1b17665232b7120056382858a1fd08faf3450b3b25b3899b71c290103c6ea43" Oct 31 01:14:59.555128 env[1316]: 2025-10-31 01:14:59.543 [INFO][3780] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a1b17665232b7120056382858a1fd08faf3450b3b25b3899b71c290103c6ea43" HandleID="k8s-pod-network.a1b17665232b7120056382858a1fd08faf3450b3b25b3899b71c290103c6ea43" Workload="localhost-k8s-calico--apiserver--86687d576--r924d-eth0" Oct 31 01:14:59.555128 env[1316]: 2025-10-31 01:14:59.543 [INFO][3780] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:14:59.555128 env[1316]: 2025-10-31 01:14:59.543 [INFO][3780] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:14:59.555128 env[1316]: 2025-10-31 01:14:59.549 [WARNING][3780] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a1b17665232b7120056382858a1fd08faf3450b3b25b3899b71c290103c6ea43" HandleID="k8s-pod-network.a1b17665232b7120056382858a1fd08faf3450b3b25b3899b71c290103c6ea43" Workload="localhost-k8s-calico--apiserver--86687d576--r924d-eth0" Oct 31 01:14:59.555128 env[1316]: 2025-10-31 01:14:59.549 [INFO][3780] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a1b17665232b7120056382858a1fd08faf3450b3b25b3899b71c290103c6ea43" HandleID="k8s-pod-network.a1b17665232b7120056382858a1fd08faf3450b3b25b3899b71c290103c6ea43" Workload="localhost-k8s-calico--apiserver--86687d576--r924d-eth0" Oct 31 01:14:59.555128 env[1316]: 2025-10-31 01:14:59.551 [INFO][3780] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:14:59.555128 env[1316]: 2025-10-31 01:14:59.553 [INFO][3772] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a1b17665232b7120056382858a1fd08faf3450b3b25b3899b71c290103c6ea43" Oct 31 01:14:59.556192 env[1316]: time="2025-10-31T01:14:59.555274796Z" level=info msg="TearDown network for sandbox \"a1b17665232b7120056382858a1fd08faf3450b3b25b3899b71c290103c6ea43\" successfully" Oct 31 01:14:59.556192 env[1316]: time="2025-10-31T01:14:59.555324420Z" level=info msg="StopPodSandbox for \"a1b17665232b7120056382858a1fd08faf3450b3b25b3899b71c290103c6ea43\" returns successfully" Oct 31 01:14:59.556192 env[1316]: time="2025-10-31T01:14:59.555958612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86687d576-r924d,Uid:b7a793cf-29da-4092-aaf4-95f63c307028,Namespace:calico-apiserver,Attempt:1,}" Oct 31 01:14:59.558762 systemd[1]: run-netns-cni\x2de03d25f5\x2d6bf4\x2d4429\x2de326\x2d285d8415256b.mount: Deactivated successfully. 
Oct 31 01:14:59.655546 systemd-networkd[1085]: caliac2d4a9bacb: Link UP Oct 31 01:14:59.659429 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Oct 31 01:14:59.659472 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): caliac2d4a9bacb: link becomes ready Oct 31 01:14:59.659666 systemd-networkd[1085]: caliac2d4a9bacb: Gained carrier Oct 31 01:14:59.673690 env[1316]: 2025-10-31 01:14:59.587 [INFO][3787] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 31 01:14:59.673690 env[1316]: 2025-10-31 01:14:59.599 [INFO][3787] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--86687d576--r924d-eth0 calico-apiserver-86687d576- calico-apiserver b7a793cf-29da-4092-aaf4-95f63c307028 1002 0 2025-10-31 01:14:31 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:86687d576 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-86687d576-r924d eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliac2d4a9bacb [] [] }} ContainerID="2fdb71b7e0cb842cb8e171237c28bdf005478b707c7f9893a436e16146d576b2" Namespace="calico-apiserver" Pod="calico-apiserver-86687d576-r924d" WorkloadEndpoint="localhost-k8s-calico--apiserver--86687d576--r924d-" Oct 31 01:14:59.673690 env[1316]: 2025-10-31 01:14:59.599 [INFO][3787] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2fdb71b7e0cb842cb8e171237c28bdf005478b707c7f9893a436e16146d576b2" Namespace="calico-apiserver" Pod="calico-apiserver-86687d576-r924d" WorkloadEndpoint="localhost-k8s-calico--apiserver--86687d576--r924d-eth0" Oct 31 01:14:59.673690 env[1316]: 2025-10-31 01:14:59.620 [INFO][3803] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="2fdb71b7e0cb842cb8e171237c28bdf005478b707c7f9893a436e16146d576b2" HandleID="k8s-pod-network.2fdb71b7e0cb842cb8e171237c28bdf005478b707c7f9893a436e16146d576b2" Workload="localhost-k8s-calico--apiserver--86687d576--r924d-eth0" Oct 31 01:14:59.673690 env[1316]: 2025-10-31 01:14:59.621 [INFO][3803] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2fdb71b7e0cb842cb8e171237c28bdf005478b707c7f9893a436e16146d576b2" HandleID="k8s-pod-network.2fdb71b7e0cb842cb8e171237c28bdf005478b707c7f9893a436e16146d576b2" Workload="localhost-k8s-calico--apiserver--86687d576--r924d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f460), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-86687d576-r924d", "timestamp":"2025-10-31 01:14:59.620947106 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 01:14:59.673690 env[1316]: 2025-10-31 01:14:59.621 [INFO][3803] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:14:59.673690 env[1316]: 2025-10-31 01:14:59.621 [INFO][3803] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 01:14:59.673690 env[1316]: 2025-10-31 01:14:59.621 [INFO][3803] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 01:14:59.673690 env[1316]: 2025-10-31 01:14:59.628 [INFO][3803] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2fdb71b7e0cb842cb8e171237c28bdf005478b707c7f9893a436e16146d576b2" host="localhost" Oct 31 01:14:59.673690 env[1316]: 2025-10-31 01:14:59.632 [INFO][3803] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 01:14:59.673690 env[1316]: 2025-10-31 01:14:59.636 [INFO][3803] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 01:14:59.673690 env[1316]: 2025-10-31 01:14:59.638 [INFO][3803] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 01:14:59.673690 env[1316]: 2025-10-31 01:14:59.640 [INFO][3803] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 01:14:59.673690 env[1316]: 2025-10-31 01:14:59.640 [INFO][3803] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2fdb71b7e0cb842cb8e171237c28bdf005478b707c7f9893a436e16146d576b2" host="localhost" Oct 31 01:14:59.673690 env[1316]: 2025-10-31 01:14:59.642 [INFO][3803] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2fdb71b7e0cb842cb8e171237c28bdf005478b707c7f9893a436e16146d576b2 Oct 31 01:14:59.673690 env[1316]: 2025-10-31 01:14:59.645 [INFO][3803] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2fdb71b7e0cb842cb8e171237c28bdf005478b707c7f9893a436e16146d576b2" host="localhost" Oct 31 01:14:59.673690 env[1316]: 2025-10-31 01:14:59.651 [INFO][3803] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.2fdb71b7e0cb842cb8e171237c28bdf005478b707c7f9893a436e16146d576b2" host="localhost" Oct 31 
01:14:59.673690 env[1316]: 2025-10-31 01:14:59.651 [INFO][3803] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.2fdb71b7e0cb842cb8e171237c28bdf005478b707c7f9893a436e16146d576b2" host="localhost" Oct 31 01:14:59.673690 env[1316]: 2025-10-31 01:14:59.651 [INFO][3803] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:14:59.673690 env[1316]: 2025-10-31 01:14:59.651 [INFO][3803] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="2fdb71b7e0cb842cb8e171237c28bdf005478b707c7f9893a436e16146d576b2" HandleID="k8s-pod-network.2fdb71b7e0cb842cb8e171237c28bdf005478b707c7f9893a436e16146d576b2" Workload="localhost-k8s-calico--apiserver--86687d576--r924d-eth0" Oct 31 01:14:59.674236 env[1316]: 2025-10-31 01:14:59.653 [INFO][3787] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2fdb71b7e0cb842cb8e171237c28bdf005478b707c7f9893a436e16146d576b2" Namespace="calico-apiserver" Pod="calico-apiserver-86687d576-r924d" WorkloadEndpoint="localhost-k8s-calico--apiserver--86687d576--r924d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--86687d576--r924d-eth0", GenerateName:"calico-apiserver-86687d576-", Namespace:"calico-apiserver", SelfLink:"", UID:"b7a793cf-29da-4092-aaf4-95f63c307028", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 14, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86687d576", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-86687d576-r924d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliac2d4a9bacb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:14:59.674236 env[1316]: 2025-10-31 01:14:59.653 [INFO][3787] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="2fdb71b7e0cb842cb8e171237c28bdf005478b707c7f9893a436e16146d576b2" Namespace="calico-apiserver" Pod="calico-apiserver-86687d576-r924d" WorkloadEndpoint="localhost-k8s-calico--apiserver--86687d576--r924d-eth0" Oct 31 01:14:59.674236 env[1316]: 2025-10-31 01:14:59.653 [INFO][3787] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliac2d4a9bacb ContainerID="2fdb71b7e0cb842cb8e171237c28bdf005478b707c7f9893a436e16146d576b2" Namespace="calico-apiserver" Pod="calico-apiserver-86687d576-r924d" WorkloadEndpoint="localhost-k8s-calico--apiserver--86687d576--r924d-eth0" Oct 31 01:14:59.674236 env[1316]: 2025-10-31 01:14:59.659 [INFO][3787] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2fdb71b7e0cb842cb8e171237c28bdf005478b707c7f9893a436e16146d576b2" Namespace="calico-apiserver" Pod="calico-apiserver-86687d576-r924d" WorkloadEndpoint="localhost-k8s-calico--apiserver--86687d576--r924d-eth0" Oct 31 01:14:59.674236 env[1316]: 2025-10-31 01:14:59.660 [INFO][3787] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2fdb71b7e0cb842cb8e171237c28bdf005478b707c7f9893a436e16146d576b2" Namespace="calico-apiserver" Pod="calico-apiserver-86687d576-r924d" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--86687d576--r924d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--86687d576--r924d-eth0", GenerateName:"calico-apiserver-86687d576-", Namespace:"calico-apiserver", SelfLink:"", UID:"b7a793cf-29da-4092-aaf4-95f63c307028", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 14, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86687d576", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2fdb71b7e0cb842cb8e171237c28bdf005478b707c7f9893a436e16146d576b2", Pod:"calico-apiserver-86687d576-r924d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliac2d4a9bacb", MAC:"be:05:e9:ce:3f:3f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:14:59.674236 env[1316]: 2025-10-31 01:14:59.671 [INFO][3787] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2fdb71b7e0cb842cb8e171237c28bdf005478b707c7f9893a436e16146d576b2" Namespace="calico-apiserver" Pod="calico-apiserver-86687d576-r924d" WorkloadEndpoint="localhost-k8s-calico--apiserver--86687d576--r924d-eth0" Oct 
31 01:14:59.686878 env[1316]: time="2025-10-31T01:14:59.686782646Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 01:14:59.686878 env[1316]: time="2025-10-31T01:14:59.686854241Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 01:14:59.686972 env[1316]: time="2025-10-31T01:14:59.686895510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:14:59.687246 env[1316]: time="2025-10-31T01:14:59.687182534Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2fdb71b7e0cb842cb8e171237c28bdf005478b707c7f9893a436e16146d576b2 pid=3829 runtime=io.containerd.runc.v2 Oct 31 01:14:59.716145 systemd-resolved[1231]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 01:14:59.739685 env[1316]: time="2025-10-31T01:14:59.739510979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86687d576-r924d,Uid:b7a793cf-29da-4092-aaf4-95f63c307028,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"2fdb71b7e0cb842cb8e171237c28bdf005478b707c7f9893a436e16146d576b2\"" Oct 31 01:14:59.741948 env[1316]: time="2025-10-31T01:14:59.741893034Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 01:15:00.217955 env[1316]: time="2025-10-31T01:15:00.217865909Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:15:00.247463 env[1316]: time="2025-10-31T01:15:00.247373883Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 01:15:00.247714 kubelet[2122]: E1031 01:15:00.247665 2122 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:15:00.248040 kubelet[2122]: E1031 01:15:00.247719 2122 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:15:00.248040 kubelet[2122]: E1031 01:15:00.247865 2122 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r9jfv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-86687d576-r924d_calico-apiserver(b7a793cf-29da-4092-aaf4-95f63c307028): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 01:15:00.249104 kubelet[2122]: E1031 01:15:00.249047 2122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86687d576-r924d" podUID="b7a793cf-29da-4092-aaf4-95f63c307028" Oct 31 01:15:00.487629 env[1316]: time="2025-10-31T01:15:00.487465906Z" level=info msg="StopPodSandbox for \"4c359beb4f8ec8f908fe9d4402c54b29b0240b5e569583f1b61e012ab32f2ee0\"" Oct 31 01:15:00.563483 env[1316]: 2025-10-31 01:15:00.530 [INFO][3897] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4c359beb4f8ec8f908fe9d4402c54b29b0240b5e569583f1b61e012ab32f2ee0" Oct 31 01:15:00.563483 env[1316]: 2025-10-31 01:15:00.530 [INFO][3897] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4c359beb4f8ec8f908fe9d4402c54b29b0240b5e569583f1b61e012ab32f2ee0" iface="eth0" netns="/var/run/netns/cni-44a43bba-e9a5-7760-add4-9ae9065925b4" Oct 31 01:15:00.563483 env[1316]: 2025-10-31 01:15:00.531 [INFO][3897] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4c359beb4f8ec8f908fe9d4402c54b29b0240b5e569583f1b61e012ab32f2ee0" iface="eth0" netns="/var/run/netns/cni-44a43bba-e9a5-7760-add4-9ae9065925b4" Oct 31 01:15:00.563483 env[1316]: 2025-10-31 01:15:00.531 [INFO][3897] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4c359beb4f8ec8f908fe9d4402c54b29b0240b5e569583f1b61e012ab32f2ee0" iface="eth0" netns="/var/run/netns/cni-44a43bba-e9a5-7760-add4-9ae9065925b4" Oct 31 01:15:00.563483 env[1316]: 2025-10-31 01:15:00.531 [INFO][3897] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4c359beb4f8ec8f908fe9d4402c54b29b0240b5e569583f1b61e012ab32f2ee0" Oct 31 01:15:00.563483 env[1316]: 2025-10-31 01:15:00.531 [INFO][3897] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4c359beb4f8ec8f908fe9d4402c54b29b0240b5e569583f1b61e012ab32f2ee0" Oct 31 01:15:00.563483 env[1316]: 2025-10-31 01:15:00.552 [INFO][3906] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4c359beb4f8ec8f908fe9d4402c54b29b0240b5e569583f1b61e012ab32f2ee0" HandleID="k8s-pod-network.4c359beb4f8ec8f908fe9d4402c54b29b0240b5e569583f1b61e012ab32f2ee0" Workload="localhost-k8s-coredns--668d6bf9bc--p8xhs-eth0" Oct 31 01:15:00.563483 env[1316]: 2025-10-31 01:15:00.552 [INFO][3906] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:15:00.563483 env[1316]: 2025-10-31 01:15:00.552 [INFO][3906] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:15:00.563483 env[1316]: 2025-10-31 01:15:00.558 [WARNING][3906] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4c359beb4f8ec8f908fe9d4402c54b29b0240b5e569583f1b61e012ab32f2ee0" HandleID="k8s-pod-network.4c359beb4f8ec8f908fe9d4402c54b29b0240b5e569583f1b61e012ab32f2ee0" Workload="localhost-k8s-coredns--668d6bf9bc--p8xhs-eth0" Oct 31 01:15:00.563483 env[1316]: 2025-10-31 01:15:00.558 [INFO][3906] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4c359beb4f8ec8f908fe9d4402c54b29b0240b5e569583f1b61e012ab32f2ee0" HandleID="k8s-pod-network.4c359beb4f8ec8f908fe9d4402c54b29b0240b5e569583f1b61e012ab32f2ee0" Workload="localhost-k8s-coredns--668d6bf9bc--p8xhs-eth0" Oct 31 01:15:00.563483 env[1316]: 2025-10-31 01:15:00.559 [INFO][3906] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:15:00.563483 env[1316]: 2025-10-31 01:15:00.561 [INFO][3897] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4c359beb4f8ec8f908fe9d4402c54b29b0240b5e569583f1b61e012ab32f2ee0" Oct 31 01:15:00.563990 env[1316]: time="2025-10-31T01:15:00.563641347Z" level=info msg="TearDown network for sandbox \"4c359beb4f8ec8f908fe9d4402c54b29b0240b5e569583f1b61e012ab32f2ee0\" successfully" Oct 31 01:15:00.563990 env[1316]: time="2025-10-31T01:15:00.563682335Z" level=info msg="StopPodSandbox for \"4c359beb4f8ec8f908fe9d4402c54b29b0240b5e569583f1b61e012ab32f2ee0\" returns successfully" Oct 31 01:15:00.564065 kubelet[2122]: E1031 01:15:00.564044 2122 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:15:00.564769 env[1316]: time="2025-10-31T01:15:00.564737815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p8xhs,Uid:453498c9-0a59-4ad4-bd57-363364a2fea3,Namespace:kube-system,Attempt:1,}" Oct 31 01:15:00.567150 systemd[1]: run-netns-cni\x2d44a43bba\x2de9a5\x2d7760\x2dadd4\x2d9ae9065925b4.mount: Deactivated successfully. 
Oct 31 01:15:00.669770 systemd-networkd[1085]: cali7dc4eb3f79c: Link UP Oct 31 01:15:00.673836 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Oct 31 01:15:00.673903 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali7dc4eb3f79c: link becomes ready Oct 31 01:15:00.673928 systemd-networkd[1085]: cali7dc4eb3f79c: Gained carrier Oct 31 01:15:00.688197 env[1316]: 2025-10-31 01:15:00.598 [INFO][3915] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 31 01:15:00.688197 env[1316]: 2025-10-31 01:15:00.611 [INFO][3915] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--p8xhs-eth0 coredns-668d6bf9bc- kube-system 453498c9-0a59-4ad4-bd57-363364a2fea3 1012 0 2025-10-31 01:14:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-p8xhs eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7dc4eb3f79c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="bd955f866d47fad52d0a86b1aa02bce2ee1268b94cbf305dd4e007ee58c8f679" Namespace="kube-system" Pod="coredns-668d6bf9bc-p8xhs" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--p8xhs-" Oct 31 01:15:00.688197 env[1316]: 2025-10-31 01:15:00.611 [INFO][3915] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bd955f866d47fad52d0a86b1aa02bce2ee1268b94cbf305dd4e007ee58c8f679" Namespace="kube-system" Pod="coredns-668d6bf9bc-p8xhs" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--p8xhs-eth0" Oct 31 01:15:00.688197 env[1316]: 2025-10-31 01:15:00.634 [INFO][3929] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bd955f866d47fad52d0a86b1aa02bce2ee1268b94cbf305dd4e007ee58c8f679" HandleID="k8s-pod-network.bd955f866d47fad52d0a86b1aa02bce2ee1268b94cbf305dd4e007ee58c8f679" 
Workload="localhost-k8s-coredns--668d6bf9bc--p8xhs-eth0" Oct 31 01:15:00.688197 env[1316]: 2025-10-31 01:15:00.634 [INFO][3929] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bd955f866d47fad52d0a86b1aa02bce2ee1268b94cbf305dd4e007ee58c8f679" HandleID="k8s-pod-network.bd955f866d47fad52d0a86b1aa02bce2ee1268b94cbf305dd4e007ee58c8f679" Workload="localhost-k8s-coredns--668d6bf9bc--p8xhs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001356c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-p8xhs", "timestamp":"2025-10-31 01:15:00.634777547 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 01:15:00.688197 env[1316]: 2025-10-31 01:15:00.634 [INFO][3929] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:15:00.688197 env[1316]: 2025-10-31 01:15:00.634 [INFO][3929] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 01:15:00.688197 env[1316]: 2025-10-31 01:15:00.635 [INFO][3929] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 01:15:00.688197 env[1316]: 2025-10-31 01:15:00.641 [INFO][3929] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bd955f866d47fad52d0a86b1aa02bce2ee1268b94cbf305dd4e007ee58c8f679" host="localhost" Oct 31 01:15:00.688197 env[1316]: 2025-10-31 01:15:00.646 [INFO][3929] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 01:15:00.688197 env[1316]: 2025-10-31 01:15:00.651 [INFO][3929] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 01:15:00.688197 env[1316]: 2025-10-31 01:15:00.653 [INFO][3929] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 01:15:00.688197 env[1316]: 2025-10-31 01:15:00.655 [INFO][3929] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 01:15:00.688197 env[1316]: 2025-10-31 01:15:00.655 [INFO][3929] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bd955f866d47fad52d0a86b1aa02bce2ee1268b94cbf305dd4e007ee58c8f679" host="localhost" Oct 31 01:15:00.688197 env[1316]: 2025-10-31 01:15:00.656 [INFO][3929] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bd955f866d47fad52d0a86b1aa02bce2ee1268b94cbf305dd4e007ee58c8f679 Oct 31 01:15:00.688197 env[1316]: 2025-10-31 01:15:00.660 [INFO][3929] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bd955f866d47fad52d0a86b1aa02bce2ee1268b94cbf305dd4e007ee58c8f679" host="localhost" Oct 31 01:15:00.688197 env[1316]: 2025-10-31 01:15:00.665 [INFO][3929] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.bd955f866d47fad52d0a86b1aa02bce2ee1268b94cbf305dd4e007ee58c8f679" host="localhost" Oct 31 
01:15:00.688197 env[1316]: 2025-10-31 01:15:00.665 [INFO][3929] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.bd955f866d47fad52d0a86b1aa02bce2ee1268b94cbf305dd4e007ee58c8f679" host="localhost" Oct 31 01:15:00.688197 env[1316]: 2025-10-31 01:15:00.665 [INFO][3929] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:15:00.688197 env[1316]: 2025-10-31 01:15:00.665 [INFO][3929] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="bd955f866d47fad52d0a86b1aa02bce2ee1268b94cbf305dd4e007ee58c8f679" HandleID="k8s-pod-network.bd955f866d47fad52d0a86b1aa02bce2ee1268b94cbf305dd4e007ee58c8f679" Workload="localhost-k8s-coredns--668d6bf9bc--p8xhs-eth0" Oct 31 01:15:00.689419 env[1316]: 2025-10-31 01:15:00.668 [INFO][3915] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bd955f866d47fad52d0a86b1aa02bce2ee1268b94cbf305dd4e007ee58c8f679" Namespace="kube-system" Pod="coredns-668d6bf9bc-p8xhs" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--p8xhs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--p8xhs-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"453498c9-0a59-4ad4-bd57-363364a2fea3", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 14, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-p8xhs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7dc4eb3f79c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:15:00.689419 env[1316]: 2025-10-31 01:15:00.668 [INFO][3915] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="bd955f866d47fad52d0a86b1aa02bce2ee1268b94cbf305dd4e007ee58c8f679" Namespace="kube-system" Pod="coredns-668d6bf9bc-p8xhs" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--p8xhs-eth0" Oct 31 01:15:00.689419 env[1316]: 2025-10-31 01:15:00.668 [INFO][3915] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7dc4eb3f79c ContainerID="bd955f866d47fad52d0a86b1aa02bce2ee1268b94cbf305dd4e007ee58c8f679" Namespace="kube-system" Pod="coredns-668d6bf9bc-p8xhs" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--p8xhs-eth0" Oct 31 01:15:00.689419 env[1316]: 2025-10-31 01:15:00.673 [INFO][3915] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bd955f866d47fad52d0a86b1aa02bce2ee1268b94cbf305dd4e007ee58c8f679" Namespace="kube-system" Pod="coredns-668d6bf9bc-p8xhs" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--p8xhs-eth0" Oct 31 01:15:00.689419 env[1316]: 2025-10-31 01:15:00.674 [INFO][3915] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="bd955f866d47fad52d0a86b1aa02bce2ee1268b94cbf305dd4e007ee58c8f679" Namespace="kube-system" Pod="coredns-668d6bf9bc-p8xhs" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--p8xhs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--p8xhs-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"453498c9-0a59-4ad4-bd57-363364a2fea3", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 14, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bd955f866d47fad52d0a86b1aa02bce2ee1268b94cbf305dd4e007ee58c8f679", Pod:"coredns-668d6bf9bc-p8xhs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7dc4eb3f79c", MAC:"aa:9b:1a:95:f0:f3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:15:00.689419 env[1316]: 2025-10-31 01:15:00.686 [INFO][3915] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bd955f866d47fad52d0a86b1aa02bce2ee1268b94cbf305dd4e007ee58c8f679" Namespace="kube-system" Pod="coredns-668d6bf9bc-p8xhs" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--p8xhs-eth0" Oct 31 01:15:00.698188 env[1316]: time="2025-10-31T01:15:00.698120918Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 01:15:00.698188 env[1316]: time="2025-10-31T01:15:00.698156967Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 01:15:00.698188 env[1316]: time="2025-10-31T01:15:00.698170101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:15:00.698417 env[1316]: time="2025-10-31T01:15:00.698334483Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bd955f866d47fad52d0a86b1aa02bce2ee1268b94cbf305dd4e007ee58c8f679 pid=3952 runtime=io.containerd.runc.v2 Oct 31 01:15:00.725363 systemd-resolved[1231]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 01:15:00.735435 kubelet[2122]: E1031 01:15:00.735139 2122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-86687d576-r924d" podUID="b7a793cf-29da-4092-aaf4-95f63c307028" Oct 31 01:15:00.741250 kubelet[2122]: I1031 01:15:00.738843 2122 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 31 01:15:00.741250 kubelet[2122]: E1031 01:15:00.739275 2122 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:15:00.765484 env[1316]: time="2025-10-31T01:15:00.763047089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p8xhs,Uid:453498c9-0a59-4ad4-bd57-363364a2fea3,Namespace:kube-system,Attempt:1,} returns sandbox id \"bd955f866d47fad52d0a86b1aa02bce2ee1268b94cbf305dd4e007ee58c8f679\"" Oct 31 01:15:00.765660 kubelet[2122]: E1031 01:15:00.764110 2122 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:15:00.769651 env[1316]: time="2025-10-31T01:15:00.768494343Z" level=info msg="CreateContainer within sandbox \"bd955f866d47fad52d0a86b1aa02bce2ee1268b94cbf305dd4e007ee58c8f679\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 31 01:15:00.769000 audit[3987]: NETFILTER_CFG table=filter:101 family=2 entries=22 op=nft_register_rule pid=3987 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:15:00.769000 audit[3987]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffce9037d20 a2=0 a3=7ffce9037d0c items=0 ppid=2275 pid=3987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:00.769000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:15:00.775000 
audit[3987]: NETFILTER_CFG table=nat:102 family=2 entries=12 op=nft_register_rule pid=3987 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:15:00.775000 audit[3987]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffce9037d20 a2=0 a3=0 items=0 ppid=2275 pid=3987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:00.775000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:15:00.786434 env[1316]: time="2025-10-31T01:15:00.786399598Z" level=info msg="CreateContainer within sandbox \"bd955f866d47fad52d0a86b1aa02bce2ee1268b94cbf305dd4e007ee58c8f679\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8c26668642a7c9b2a95b826aa81896b838f4166639b4158f5e7058280cd6248a\"" Oct 31 01:15:00.788274 env[1316]: time="2025-10-31T01:15:00.788226911Z" level=info msg="StartContainer for \"8c26668642a7c9b2a95b826aa81896b838f4166639b4158f5e7058280cd6248a\"" Oct 31 01:15:00.790000 audit[3989]: NETFILTER_CFG table=filter:103 family=2 entries=21 op=nft_register_rule pid=3989 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:15:00.790000 audit[3989]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffedfe528c0 a2=0 a3=7ffedfe528ac items=0 ppid=2275 pid=3989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:00.790000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:15:00.796000 audit[3989]: NETFILTER_CFG table=nat:104 family=2 entries=19 op=nft_register_chain pid=3989 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Oct 31 01:15:00.796000 audit[3989]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffedfe528c0 a2=0 a3=7ffedfe528ac items=0 ppid=2275 pid=3989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:00.796000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:15:00.830777 env[1316]: time="2025-10-31T01:15:00.830728933Z" level=info msg="StartContainer for \"8c26668642a7c9b2a95b826aa81896b838f4166639b4158f5e7058280cd6248a\" returns successfully" Oct 31 01:15:00.897775 systemd-networkd[1085]: caliac2d4a9bacb: Gained IPv6LL Oct 31 01:15:01.244000 audit[4058]: AVC avc: denied { bpf } for pid=4058 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.244000 audit[4058]: AVC avc: denied { bpf } for pid=4058 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.244000 audit[4058]: AVC avc: denied { perfmon } for pid=4058 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.244000 audit[4058]: AVC avc: denied { perfmon } for pid=4058 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.244000 audit[4058]: AVC avc: denied { perfmon } for pid=4058 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.244000 audit[4058]: AVC avc: denied { perfmon } for pid=4058 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.244000 audit[4058]: AVC avc: denied { perfmon } for pid=4058 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.244000 audit[4058]: AVC avc: denied { bpf } for pid=4058 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.244000 audit[4058]: AVC avc: denied { bpf } for pid=4058 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.244000 audit: BPF prog-id=10 op=LOAD Oct 31 01:15:01.244000 audit[4058]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffee5c92940 a2=98 a3=1fffffffffffffff items=0 ppid=4034 pid=4058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.244000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Oct 31 01:15:01.245000 audit: BPF prog-id=10 op=UNLOAD Oct 31 01:15:01.245000 audit[4058]: AVC avc: denied { bpf } for pid=4058 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.245000 audit[4058]: AVC avc: denied { bpf } for pid=4058 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.245000 audit[4058]: AVC avc: denied { perfmon } for 
pid=4058 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.245000 audit[4058]: AVC avc: denied { perfmon } for pid=4058 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.245000 audit[4058]: AVC avc: denied { perfmon } for pid=4058 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.245000 audit[4058]: AVC avc: denied { perfmon } for pid=4058 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.245000 audit[4058]: AVC avc: denied { perfmon } for pid=4058 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.245000 audit[4058]: AVC avc: denied { bpf } for pid=4058 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.245000 audit[4058]: AVC avc: denied { bpf } for pid=4058 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.245000 audit: BPF prog-id=11 op=LOAD Oct 31 01:15:01.245000 audit[4058]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffee5c92820 a2=94 a3=3 items=0 ppid=4034 pid=4058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.245000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Oct 31 01:15:01.245000 audit: BPF prog-id=11 op=UNLOAD Oct 31 01:15:01.245000 audit[4058]: AVC avc: denied { bpf } for pid=4058 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.245000 audit[4058]: AVC avc: denied { bpf } for pid=4058 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.245000 audit[4058]: AVC avc: denied { perfmon } for pid=4058 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.245000 audit[4058]: AVC avc: denied { perfmon } for pid=4058 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.245000 audit[4058]: AVC avc: denied { perfmon } for pid=4058 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.245000 audit[4058]: AVC avc: denied { perfmon } for pid=4058 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.245000 audit[4058]: AVC avc: denied { perfmon } for pid=4058 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.245000 audit[4058]: AVC avc: denied { bpf } for pid=4058 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.245000 audit[4058]: AVC avc: denied { bpf } for pid=4058 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.245000 audit: BPF prog-id=12 op=LOAD Oct 31 01:15:01.245000 audit[4058]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffee5c92860 a2=94 a3=7ffee5c92a40 items=0 ppid=4034 pid=4058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.245000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Oct 31 01:15:01.245000 audit: BPF prog-id=12 op=UNLOAD Oct 31 01:15:01.245000 audit[4058]: AVC avc: denied { perfmon } for pid=4058 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.245000 audit[4058]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=0 a1=7ffee5c92930 a2=50 a3=a000000085 items=0 ppid=4034 pid=4058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.245000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Oct 31 01:15:01.247000 audit[4059]: AVC avc: denied { bpf } for pid=4059 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.247000 audit[4059]: AVC avc: denied { bpf } for pid=4059 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.247000 audit[4059]: AVC avc: denied { perfmon } for pid=4059 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.247000 audit[4059]: AVC avc: denied { perfmon } for pid=4059 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.247000 audit[4059]: AVC avc: denied { perfmon } for pid=4059 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.247000 audit[4059]: AVC avc: denied { perfmon } for pid=4059 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.247000 audit[4059]: AVC avc: denied { perfmon } for pid=4059 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.247000 audit[4059]: AVC avc: denied { bpf } for pid=4059 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.247000 audit[4059]: AVC avc: denied { bpf } for pid=4059 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.247000 audit: BPF prog-id=13 op=LOAD Oct 31 01:15:01.247000 audit[4059]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc2da86930 a2=98 a3=3 
items=0 ppid=4034 pid=4059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.247000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:15:01.247000 audit: BPF prog-id=13 op=UNLOAD Oct 31 01:15:01.247000 audit[4059]: AVC avc: denied { bpf } for pid=4059 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.247000 audit[4059]: AVC avc: denied { bpf } for pid=4059 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.247000 audit[4059]: AVC avc: denied { perfmon } for pid=4059 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.247000 audit[4059]: AVC avc: denied { perfmon } for pid=4059 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.247000 audit[4059]: AVC avc: denied { perfmon } for pid=4059 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.247000 audit[4059]: AVC avc: denied { perfmon } for pid=4059 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.247000 audit[4059]: AVC avc: denied { perfmon } for pid=4059 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.247000 audit[4059]: AVC avc: denied { bpf } for pid=4059 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.247000 audit[4059]: AVC avc: denied { bpf } for pid=4059 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.247000 audit: BPF prog-id=14 op=LOAD Oct 31 01:15:01.247000 audit[4059]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc2da86720 a2=94 a3=54428f items=0 ppid=4034 pid=4059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.247000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:15:01.247000 audit: BPF prog-id=14 op=UNLOAD Oct 31 01:15:01.247000 audit[4059]: AVC avc: denied { bpf } for pid=4059 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.247000 audit[4059]: AVC avc: denied { bpf } for pid=4059 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.247000 audit[4059]: AVC avc: denied { perfmon } for pid=4059 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.247000 audit[4059]: AVC avc: denied { perfmon } for pid=4059 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.247000 audit[4059]: AVC avc: denied { perfmon } for pid=4059 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.247000 audit[4059]: AVC avc: denied { perfmon } 
for pid=4059 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.247000 audit[4059]: AVC avc: denied { perfmon } for pid=4059 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.247000 audit[4059]: AVC avc: denied { bpf } for pid=4059 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.247000 audit[4059]: AVC avc: denied { bpf } for pid=4059 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.247000 audit: BPF prog-id=15 op=LOAD Oct 31 01:15:01.247000 audit[4059]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc2da86750 a2=94 a3=2 items=0 ppid=4034 pid=4059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.247000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:15:01.247000 audit: BPF prog-id=15 op=UNLOAD Oct 31 01:15:01.371000 audit[4059]: AVC avc: denied { bpf } for pid=4059 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.371000 audit[4059]: AVC avc: denied { bpf } for pid=4059 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.371000 audit[4059]: AVC avc: denied { perfmon } for pid=4059 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.371000 
audit[4059]: AVC avc: denied { perfmon } for pid=4059 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.371000 audit[4059]: AVC avc: denied { perfmon } for pid=4059 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.371000 audit[4059]: AVC avc: denied { perfmon } for pid=4059 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.371000 audit[4059]: AVC avc: denied { perfmon } for pid=4059 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.371000 audit[4059]: AVC avc: denied { bpf } for pid=4059 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.371000 audit[4059]: AVC avc: denied { bpf } for pid=4059 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.371000 audit: BPF prog-id=16 op=LOAD Oct 31 01:15:01.371000 audit[4059]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc2da86610 a2=94 a3=1 items=0 ppid=4034 pid=4059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.371000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:15:01.371000 audit: BPF prog-id=16 op=UNLOAD Oct 31 01:15:01.371000 audit[4059]: AVC avc: denied { perfmon } for pid=4059 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 31 01:15:01.371000 audit[4059]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffc2da866e0 a2=50 a3=7ffc2da867c0 items=0 ppid=4034 pid=4059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.371000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:15:01.380000 audit[4059]: AVC avc: denied { bpf } for pid=4059 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.380000 audit[4059]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc2da86620 a2=28 a3=0 items=0 ppid=4034 pid=4059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.380000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:15:01.380000 audit[4059]: AVC avc: denied { bpf } for pid=4059 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.380000 audit[4059]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc2da86650 a2=28 a3=0 items=0 ppid=4034 pid=4059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.380000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:15:01.380000 audit[4059]: AVC avc: denied { bpf } for pid=4059 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.380000 audit[4059]: 
SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc2da86560 a2=28 a3=0 items=0 ppid=4034 pid=4059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.380000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:15:01.380000 audit[4059]: AVC avc: denied { bpf } for pid=4059 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.380000 audit[4059]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc2da86670 a2=28 a3=0 items=0 ppid=4034 pid=4059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.380000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:15:01.380000 audit[4059]: AVC avc: denied { bpf } for pid=4059 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.380000 audit[4059]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc2da86650 a2=28 a3=0 items=0 ppid=4034 pid=4059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.380000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:15:01.380000 audit[4059]: AVC avc: denied { bpf } for pid=4059 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.380000 audit[4059]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc2da86640 
a2=28 a3=0 items=0 ppid=4034 pid=4059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.380000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:15:01.380000 audit[4059]: AVC avc: denied { bpf } for pid=4059 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.380000 audit[4059]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc2da86670 a2=28 a3=0 items=0 ppid=4034 pid=4059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.380000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:15:01.380000 audit[4059]: AVC avc: denied { bpf } for pid=4059 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.380000 audit[4059]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc2da86650 a2=28 a3=0 items=0 ppid=4034 pid=4059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.380000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:15:01.380000 audit[4059]: AVC avc: denied { bpf } for pid=4059 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.380000 audit[4059]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc2da86670 a2=28 a3=0 items=0 ppid=4034 pid=4059 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.380000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:15:01.380000 audit[4059]: AVC avc: denied { bpf } for pid=4059 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.380000 audit[4059]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc2da86640 a2=28 a3=0 items=0 ppid=4034 pid=4059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.380000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:15:01.380000 audit[4059]: AVC avc: denied { bpf } for pid=4059 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.380000 audit[4059]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc2da866b0 a2=28 a3=0 items=0 ppid=4034 pid=4059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.380000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:15:01.380000 audit[4059]: AVC avc: denied { perfmon } for pid=4059 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.380000 audit[4059]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffc2da86460 a2=50 a3=1 items=0 ppid=4034 pid=4059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.380000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:15:01.380000 audit[4059]: AVC avc: denied { bpf } for pid=4059 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.380000 audit[4059]: AVC avc: denied { bpf } for pid=4059 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.380000 audit[4059]: AVC avc: denied { perfmon } for pid=4059 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.380000 audit[4059]: AVC avc: denied { perfmon } for pid=4059 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.380000 audit[4059]: AVC avc: denied { perfmon } for pid=4059 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.380000 audit[4059]: AVC avc: denied { perfmon } for pid=4059 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.380000 audit[4059]: AVC avc: denied { perfmon } for pid=4059 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.380000 audit[4059]: AVC avc: denied { bpf } for pid=4059 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.380000 audit[4059]: AVC avc: denied { bpf } for pid=4059 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.380000 audit: BPF prog-id=17 op=LOAD Oct 31 01:15:01.380000 audit[4059]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc2da86460 a2=94 a3=5 items=0 ppid=4034 pid=4059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.380000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:15:01.381000 audit: BPF prog-id=17 op=UNLOAD Oct 31 01:15:01.381000 audit[4059]: AVC avc: denied { perfmon } for pid=4059 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.381000 audit[4059]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffc2da86510 a2=50 a3=1 items=0 ppid=4034 pid=4059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.381000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:15:01.381000 audit[4059]: AVC avc: denied { bpf } for pid=4059 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.381000 audit[4059]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffc2da86630 a2=4 a3=38 items=0 ppid=4034 pid=4059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.381000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:15:01.381000 audit[4059]: AVC avc: denied { bpf } for 
pid=4059 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.381000 audit[4059]: AVC avc: denied { bpf } for pid=4059 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.381000 audit[4059]: AVC avc: denied { perfmon } for pid=4059 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.381000 audit[4059]: AVC avc: denied { bpf } for pid=4059 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.381000 audit[4059]: AVC avc: denied { perfmon } for pid=4059 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.381000 audit[4059]: AVC avc: denied { perfmon } for pid=4059 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.381000 audit[4059]: AVC avc: denied { perfmon } for pid=4059 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.381000 audit[4059]: AVC avc: denied { perfmon } for pid=4059 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.381000 audit[4059]: AVC avc: denied { perfmon } for pid=4059 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.381000 audit[4059]: AVC avc: denied { bpf } for pid=4059 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.381000 audit[4059]: AVC avc: denied { confidentiality } for pid=4059 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Oct 31 01:15:01.381000 audit[4059]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffc2da86680 a2=94 a3=6 items=0 ppid=4034 pid=4059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.381000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:15:01.381000 audit[4059]: AVC avc: denied { bpf } for pid=4059 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.381000 audit[4059]: AVC avc: denied { bpf } for pid=4059 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.381000 audit[4059]: AVC avc: denied { perfmon } for pid=4059 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.381000 audit[4059]: AVC avc: denied { bpf } for pid=4059 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.381000 audit[4059]: AVC avc: denied { perfmon } for pid=4059 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.381000 audit[4059]: AVC avc: denied { perfmon } for pid=4059 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.381000 audit[4059]: AVC avc: denied { perfmon } for pid=4059 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.381000 audit[4059]: AVC avc: denied { perfmon } for pid=4059 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.381000 audit[4059]: AVC avc: denied { perfmon } for pid=4059 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.381000 audit[4059]: AVC avc: denied { bpf } for pid=4059 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.381000 audit[4059]: AVC avc: denied { confidentiality } for pid=4059 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Oct 31 01:15:01.381000 audit[4059]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffc2da85e30 a2=94 a3=88 items=0 ppid=4034 pid=4059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.381000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:15:01.381000 audit[4059]: AVC avc: denied { bpf } for pid=4059 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.381000 audit[4059]: AVC avc: denied { bpf } for pid=4059 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.381000 audit[4059]: AVC avc: denied { perfmon } for pid=4059 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.381000 audit[4059]: AVC avc: denied { bpf } for pid=4059 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.381000 audit[4059]: AVC avc: denied { perfmon } for pid=4059 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.381000 audit[4059]: AVC avc: denied { perfmon } for pid=4059 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.381000 audit[4059]: AVC avc: denied { perfmon } for pid=4059 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.381000 audit[4059]: AVC avc: denied { perfmon } for pid=4059 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.381000 audit[4059]: AVC avc: denied { perfmon } for pid=4059 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.381000 audit[4059]: AVC avc: denied { bpf } for pid=4059 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.381000 audit[4059]: AVC avc: denied { confidentiality } for pid=4059 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Oct 31 01:15:01.381000 audit[4059]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffc2da85e30 a2=94 a3=88 items=0 ppid=4034 pid=4059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.381000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 01:15:01.389000 audit[4091]: AVC avc: denied { bpf } for pid=4091 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.389000 audit[4091]: AVC avc: denied { bpf } for pid=4091 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.389000 audit[4091]: AVC avc: denied { perfmon } for pid=4091 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.389000 audit[4091]: AVC avc: denied { perfmon } for pid=4091 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.389000 audit[4091]: AVC avc: denied { perfmon } for pid=4091 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.389000 audit[4091]: AVC avc: denied { perfmon } for pid=4091 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.389000 audit[4091]: AVC avc: denied { perfmon } for pid=4091 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.389000 audit[4091]: AVC avc: denied { bpf } for pid=4091 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.389000 audit[4091]: AVC avc: denied { bpf } for pid=4091 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.389000 audit: BPF prog-id=18 op=LOAD Oct 31 01:15:01.389000 audit[4091]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcdec91410 a2=98 a3=1999999999999999 items=0 ppid=4034 pid=4091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.389000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Oct 31 01:15:01.389000 audit: BPF prog-id=18 op=UNLOAD Oct 31 01:15:01.389000 audit[4091]: AVC avc: denied { bpf } for pid=4091 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.389000 audit[4091]: AVC avc: denied { bpf } for pid=4091 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.389000 audit[4091]: AVC avc: denied { perfmon } for pid=4091 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.389000 audit[4091]: AVC avc: denied { perfmon } for pid=4091 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.389000 audit[4091]: AVC avc: denied { perfmon } for pid=4091 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.389000 audit[4091]: AVC avc: denied { perfmon } for pid=4091 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.389000 audit[4091]: AVC avc: denied { perfmon } for pid=4091 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.389000 audit[4091]: AVC avc: denied { bpf } for pid=4091 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.389000 audit[4091]: AVC avc: denied { bpf } for pid=4091 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.389000 audit: BPF prog-id=19 op=LOAD Oct 31 01:15:01.389000 audit[4091]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcdec912f0 a2=94 a3=ffff items=0 ppid=4034 pid=4091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.389000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Oct 31 01:15:01.389000 audit: BPF prog-id=19 op=UNLOAD Oct 31 01:15:01.389000 audit[4091]: AVC avc: denied { 
bpf } for pid=4091 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.389000 audit[4091]: AVC avc: denied { bpf } for pid=4091 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.389000 audit[4091]: AVC avc: denied { perfmon } for pid=4091 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.389000 audit[4091]: AVC avc: denied { perfmon } for pid=4091 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.389000 audit[4091]: AVC avc: denied { perfmon } for pid=4091 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.389000 audit[4091]: AVC avc: denied { perfmon } for pid=4091 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.389000 audit[4091]: AVC avc: denied { perfmon } for pid=4091 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.389000 audit[4091]: AVC avc: denied { bpf } for pid=4091 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.389000 audit[4091]: AVC avc: denied { bpf } for pid=4091 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.389000 audit: BPF prog-id=20 op=LOAD Oct 31 01:15:01.389000 audit[4091]: SYSCALL arch=c000003e syscall=321 
success=yes exit=3 a0=5 a1=7ffcdec91330 a2=94 a3=7ffcdec91510 items=0 ppid=4034 pid=4091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.389000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Oct 31 01:15:01.389000 audit: BPF prog-id=20 op=UNLOAD Oct 31 01:15:01.445475 systemd-networkd[1085]: vxlan.calico: Link UP Oct 31 01:15:01.445483 systemd-networkd[1085]: vxlan.calico: Gained carrier Oct 31 01:15:01.463000 audit[4115]: AVC avc: denied { bpf } for pid=4115 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.463000 audit[4115]: AVC avc: denied { bpf } for pid=4115 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.463000 audit[4115]: AVC avc: denied { perfmon } for pid=4115 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.463000 audit[4115]: AVC avc: denied { perfmon } for pid=4115 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.463000 audit[4115]: AVC avc: denied { perfmon } for pid=4115 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.463000 audit[4115]: AVC avc: denied { perfmon } for pid=4115 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.463000 audit[4115]: AVC avc: denied { perfmon } for pid=4115 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.463000 audit[4115]: AVC avc: denied { bpf } for pid=4115 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.463000 audit[4115]: AVC avc: denied { bpf } for pid=4115 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.463000 audit: BPF prog-id=21 op=LOAD Oct 31 01:15:01.463000 audit[4115]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc70ca1fe0 a2=98 a3=0 items=0 ppid=4034 pid=4115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.463000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 01:15:01.464000 audit: BPF prog-id=21 op=UNLOAD Oct 31 01:15:01.464000 audit[4115]: AVC avc: denied { bpf } for pid=4115 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.464000 audit[4115]: AVC avc: denied { bpf } for pid=4115 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.464000 audit[4115]: AVC avc: denied { perfmon } for pid=4115 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.464000 audit[4115]: AVC avc: denied { perfmon } for pid=4115 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.464000 audit[4115]: AVC avc: denied { perfmon } for pid=4115 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.464000 audit[4115]: AVC avc: denied { perfmon } for pid=4115 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.464000 audit[4115]: AVC avc: denied { perfmon } for pid=4115 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.464000 audit[4115]: AVC avc: denied { bpf } for pid=4115 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.464000 audit[4115]: AVC avc: denied { bpf } for pid=4115 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.464000 audit: BPF prog-id=22 op=LOAD Oct 31 01:15:01.464000 audit[4115]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc70ca1df0 a2=94 a3=54428f items=0 ppid=4034 pid=4115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.464000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 01:15:01.464000 audit: BPF prog-id=22 op=UNLOAD Oct 31 01:15:01.464000 audit[4115]: AVC avc: denied { bpf } for pid=4115 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.464000 audit[4115]: AVC avc: denied { bpf } for pid=4115 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.464000 audit[4115]: AVC avc: denied { perfmon } for pid=4115 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.464000 audit[4115]: AVC avc: denied { perfmon } for pid=4115 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.464000 audit[4115]: AVC avc: denied { perfmon } for pid=4115 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.464000 audit[4115]: AVC avc: denied { perfmon } for pid=4115 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.464000 audit[4115]: AVC avc: denied { perfmon } for pid=4115 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.464000 audit[4115]: AVC avc: denied { bpf } for pid=4115 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 
01:15:01.464000 audit[4115]: AVC avc: denied { bpf } for pid=4115 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.464000 audit: BPF prog-id=23 op=LOAD Oct 31 01:15:01.464000 audit[4115]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc70ca1e20 a2=94 a3=2 items=0 ppid=4034 pid=4115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.464000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 01:15:01.464000 audit: BPF prog-id=23 op=UNLOAD Oct 31 01:15:01.464000 audit[4115]: AVC avc: denied { bpf } for pid=4115 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.464000 audit[4115]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffc70ca1cf0 a2=28 a3=0 items=0 ppid=4034 pid=4115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.464000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 01:15:01.464000 audit[4115]: AVC avc: denied { bpf } for pid=4115 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.464000 audit[4115]: SYSCALL arch=c000003e syscall=321 success=no 
exit=-22 a0=12 a1=7ffc70ca1d20 a2=28 a3=0 items=0 ppid=4034 pid=4115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.464000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 01:15:01.464000 audit[4115]: AVC avc: denied { bpf } for pid=4115 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.464000 audit[4115]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc70ca1c30 a2=28 a3=0 items=0 ppid=4034 pid=4115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.464000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 01:15:01.464000 audit[4115]: AVC avc: denied { bpf } for pid=4115 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.464000 audit[4115]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffc70ca1d40 a2=28 a3=0 items=0 ppid=4034 pid=4115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.464000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 01:15:01.464000 audit[4115]: AVC avc: denied { bpf } for pid=4115 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.464000 audit[4115]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffc70ca1d20 a2=28 a3=0 items=0 ppid=4034 pid=4115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.464000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 01:15:01.464000 audit[4115]: AVC avc: denied { bpf } for pid=4115 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.464000 audit[4115]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffc70ca1d10 a2=28 a3=0 items=0 ppid=4034 pid=4115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.464000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 01:15:01.464000 audit[4115]: AVC avc: denied { bpf } for pid=4115 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.464000 audit[4115]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffc70ca1d40 a2=28 a3=0 items=0 ppid=4034 pid=4115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.464000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 01:15:01.464000 audit[4115]: AVC avc: denied { bpf } for pid=4115 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.464000 audit[4115]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc70ca1d20 a2=28 a3=0 items=0 ppid=4034 pid=4115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.464000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 01:15:01.464000 audit[4115]: AVC avc: denied { bpf } for pid=4115 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.464000 audit[4115]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc70ca1d40 a2=28 a3=0 items=0 ppid=4034 pid=4115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Oct 31 01:15:01.464000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 01:15:01.464000 audit[4115]: AVC avc: denied { bpf } for pid=4115 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.464000 audit[4115]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc70ca1d10 a2=28 a3=0 items=0 ppid=4034 pid=4115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.464000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 01:15:01.464000 audit[4115]: AVC avc: denied { bpf } for pid=4115 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.464000 audit[4115]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffc70ca1d80 a2=28 a3=0 items=0 ppid=4034 pid=4115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.464000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 01:15:01.464000 audit[4115]: AVC avc: denied { bpf } for pid=4115 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.464000 audit[4115]: AVC avc: denied { bpf } for pid=4115 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.464000 audit[4115]: AVC avc: denied { perfmon } for pid=4115 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.464000 audit[4115]: AVC avc: denied { perfmon } for pid=4115 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.464000 audit[4115]: AVC avc: denied { perfmon } for pid=4115 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.464000 audit[4115]: AVC avc: denied { perfmon } for pid=4115 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.464000 audit[4115]: AVC avc: denied { perfmon } for pid=4115 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.464000 audit[4115]: AVC avc: denied { bpf } for pid=4115 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.464000 audit[4115]: AVC avc: denied { bpf } for pid=4115 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.464000 audit: BPF prog-id=24 op=LOAD Oct 31 01:15:01.464000 audit[4115]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc70ca1bf0 a2=94 a3=0 
items=0 ppid=4034 pid=4115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.464000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 01:15:01.465000 audit: BPF prog-id=24 op=UNLOAD Oct 31 01:15:01.465000 audit[4115]: AVC avc: denied { bpf } for pid=4115 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.465000 audit[4115]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7ffc70ca1be0 a2=50 a3=2800 items=0 ppid=4034 pid=4115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.465000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 01:15:01.465000 audit[4115]: AVC avc: denied { bpf } for pid=4115 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.465000 audit[4115]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=0 a1=7ffc70ca1be0 a2=50 a3=2800 items=0 ppid=4034 pid=4115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.465000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 01:15:01.465000 audit[4115]: AVC avc: denied { bpf } for pid=4115 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.465000 audit[4115]: AVC avc: denied { bpf } for pid=4115 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.465000 audit[4115]: AVC avc: denied { bpf } for pid=4115 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.465000 audit[4115]: AVC avc: denied { perfmon } for pid=4115 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.465000 audit[4115]: AVC avc: denied { perfmon } for pid=4115 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.465000 audit[4115]: AVC avc: denied { perfmon } for pid=4115 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.465000 audit[4115]: AVC avc: denied { perfmon } for pid=4115 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.465000 audit[4115]: AVC avc: denied { perfmon } for pid=4115 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.465000 audit[4115]: AVC avc: denied { bpf } for 
pid=4115 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.465000 audit[4115]: AVC avc: denied { bpf } for pid=4115 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.465000 audit: BPF prog-id=25 op=LOAD Oct 31 01:15:01.465000 audit[4115]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc70ca1400 a2=94 a3=2 items=0 ppid=4034 pid=4115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.465000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 01:15:01.465000 audit: BPF prog-id=25 op=UNLOAD Oct 31 01:15:01.465000 audit[4115]: AVC avc: denied { bpf } for pid=4115 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.465000 audit[4115]: AVC avc: denied { bpf } for pid=4115 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.465000 audit[4115]: AVC avc: denied { bpf } for pid=4115 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.465000 audit[4115]: AVC avc: denied { perfmon } for pid=4115 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.465000 audit[4115]: AVC avc: denied { perfmon } for pid=4115 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.465000 audit[4115]: AVC avc: denied { perfmon } for pid=4115 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.465000 audit[4115]: AVC avc: denied { perfmon } for pid=4115 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.465000 audit[4115]: AVC avc: denied { perfmon } for pid=4115 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.465000 audit[4115]: AVC avc: denied { bpf } for pid=4115 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.465000 audit[4115]: AVC avc: denied { bpf } for pid=4115 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.465000 audit: BPF prog-id=26 op=LOAD Oct 31 01:15:01.465000 audit[4115]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc70ca1500 a2=94 a3=30 items=0 ppid=4034 pid=4115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.465000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 01:15:01.478000 audit[4126]: AVC avc: denied { bpf } for pid=4126 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.478000 audit[4126]: AVC avc: denied { bpf } for pid=4126 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.478000 audit[4126]: AVC avc: denied { perfmon } for pid=4126 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.478000 audit[4126]: AVC avc: denied { perfmon } for pid=4126 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.478000 audit[4126]: AVC avc: denied { perfmon } for pid=4126 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.478000 audit[4126]: AVC avc: denied { perfmon } for pid=4126 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.478000 audit[4126]: AVC avc: denied { perfmon } for pid=4126 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.478000 audit[4126]: AVC avc: denied { bpf } for pid=4126 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.478000 audit[4126]: AVC avc: denied { bpf } for pid=4126 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.478000 audit: BPF prog-id=27 op=LOAD Oct 31 01:15:01.478000 audit[4126]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffa2e8d500 a2=98 a3=0 items=0 ppid=4034 pid=4126 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.478000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 01:15:01.478000 audit: BPF prog-id=27 op=UNLOAD Oct 31 01:15:01.478000 audit[4126]: AVC avc: denied { bpf } for pid=4126 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.478000 audit[4126]: AVC avc: denied { bpf } for pid=4126 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.478000 audit[4126]: AVC avc: denied { perfmon } for pid=4126 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.478000 audit[4126]: AVC avc: denied { perfmon } for pid=4126 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.478000 audit[4126]: AVC avc: denied { perfmon } for pid=4126 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.478000 audit[4126]: AVC avc: denied { perfmon } for pid=4126 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.478000 audit[4126]: AVC avc: denied { perfmon } for pid=4126 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.478000 
audit[4126]: AVC avc: denied { bpf } for pid=4126 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.478000 audit[4126]: AVC avc: denied { bpf } for pid=4126 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.478000 audit: BPF prog-id=28 op=LOAD Oct 31 01:15:01.478000 audit[4126]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffa2e8d2f0 a2=94 a3=54428f items=0 ppid=4034 pid=4126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.478000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 01:15:01.478000 audit: BPF prog-id=28 op=UNLOAD Oct 31 01:15:01.478000 audit[4126]: AVC avc: denied { bpf } for pid=4126 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.478000 audit[4126]: AVC avc: denied { bpf } for pid=4126 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.478000 audit[4126]: AVC avc: denied { perfmon } for pid=4126 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.478000 audit[4126]: AVC avc: denied { perfmon } for pid=4126 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.478000 audit[4126]: AVC avc: denied { perfmon } for 
pid=4126 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.478000 audit[4126]: AVC avc: denied { perfmon } for pid=4126 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.478000 audit[4126]: AVC avc: denied { perfmon } for pid=4126 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.478000 audit[4126]: AVC avc: denied { bpf } for pid=4126 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.478000 audit[4126]: AVC avc: denied { bpf } for pid=4126 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.478000 audit: BPF prog-id=29 op=LOAD Oct 31 01:15:01.478000 audit[4126]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffa2e8d320 a2=94 a3=2 items=0 ppid=4034 pid=4126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.478000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 01:15:01.478000 audit: BPF prog-id=29 op=UNLOAD Oct 31 01:15:01.487211 env[1316]: time="2025-10-31T01:15:01.487156907Z" level=info msg="StopPodSandbox for \"a8a7d15da3bf8cbdb191a985928692fcb1e7fac6e2665ad7368d328a401536ae\"" Oct 31 01:15:01.487556 env[1316]: time="2025-10-31T01:15:01.487515507Z" level=info msg="StopPodSandbox for 
\"a8dc1d78076c25070d75912ab77bb6fc27814dbb8dff1f2793ffbaeb1045cf72\"" Oct 31 01:15:01.589452 env[1316]: 2025-10-31 01:15:01.552 [INFO][4151] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a8dc1d78076c25070d75912ab77bb6fc27814dbb8dff1f2793ffbaeb1045cf72" Oct 31 01:15:01.589452 env[1316]: 2025-10-31 01:15:01.552 [INFO][4151] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a8dc1d78076c25070d75912ab77bb6fc27814dbb8dff1f2793ffbaeb1045cf72" iface="eth0" netns="/var/run/netns/cni-9044602a-de36-232f-4f8a-8f9299a47d07" Oct 31 01:15:01.589452 env[1316]: 2025-10-31 01:15:01.552 [INFO][4151] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a8dc1d78076c25070d75912ab77bb6fc27814dbb8dff1f2793ffbaeb1045cf72" iface="eth0" netns="/var/run/netns/cni-9044602a-de36-232f-4f8a-8f9299a47d07" Oct 31 01:15:01.589452 env[1316]: 2025-10-31 01:15:01.553 [INFO][4151] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a8dc1d78076c25070d75912ab77bb6fc27814dbb8dff1f2793ffbaeb1045cf72" iface="eth0" netns="/var/run/netns/cni-9044602a-de36-232f-4f8a-8f9299a47d07" Oct 31 01:15:01.589452 env[1316]: 2025-10-31 01:15:01.553 [INFO][4151] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a8dc1d78076c25070d75912ab77bb6fc27814dbb8dff1f2793ffbaeb1045cf72" Oct 31 01:15:01.589452 env[1316]: 2025-10-31 01:15:01.553 [INFO][4151] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a8dc1d78076c25070d75912ab77bb6fc27814dbb8dff1f2793ffbaeb1045cf72" Oct 31 01:15:01.589452 env[1316]: 2025-10-31 01:15:01.579 [INFO][4168] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a8dc1d78076c25070d75912ab77bb6fc27814dbb8dff1f2793ffbaeb1045cf72" HandleID="k8s-pod-network.a8dc1d78076c25070d75912ab77bb6fc27814dbb8dff1f2793ffbaeb1045cf72" Workload="localhost-k8s-coredns--668d6bf9bc--kx9d2-eth0" Oct 31 01:15:01.589452 env[1316]: 2025-10-31 01:15:01.580 [INFO][4168] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:15:01.589452 env[1316]: 2025-10-31 01:15:01.580 [INFO][4168] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:15:01.589452 env[1316]: 2025-10-31 01:15:01.585 [WARNING][4168] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a8dc1d78076c25070d75912ab77bb6fc27814dbb8dff1f2793ffbaeb1045cf72" HandleID="k8s-pod-network.a8dc1d78076c25070d75912ab77bb6fc27814dbb8dff1f2793ffbaeb1045cf72" Workload="localhost-k8s-coredns--668d6bf9bc--kx9d2-eth0" Oct 31 01:15:01.589452 env[1316]: 2025-10-31 01:15:01.585 [INFO][4168] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a8dc1d78076c25070d75912ab77bb6fc27814dbb8dff1f2793ffbaeb1045cf72" HandleID="k8s-pod-network.a8dc1d78076c25070d75912ab77bb6fc27814dbb8dff1f2793ffbaeb1045cf72" Workload="localhost-k8s-coredns--668d6bf9bc--kx9d2-eth0" Oct 31 01:15:01.589452 env[1316]: 2025-10-31 01:15:01.586 [INFO][4168] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:15:01.589452 env[1316]: 2025-10-31 01:15:01.587 [INFO][4151] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a8dc1d78076c25070d75912ab77bb6fc27814dbb8dff1f2793ffbaeb1045cf72" Oct 31 01:15:01.592069 systemd[1]: run-netns-cni\x2d9044602a\x2dde36\x2d232f\x2d4f8a\x2d8f9299a47d07.mount: Deactivated successfully. 
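The AVC records earlier in this section all share one shape: a denied permission (`bpf` or `perfmon`), a `capability=` number, and a set of `key=value` fields. A minimal parsing sketch (Python; not part of the original tooling, and the record format is assumed to match exactly what is logged above — note `capability=38` is CAP_PERFMON and `capability=39` is CAP_BPF on this 5.15 kernel):

```python
import re

# Match the denied permission, then capture the trailing key=value fields.
AVC_RE = re.compile(r"avc:\s+denied\s+\{ (?P<perm>\w+) \}\s+for\s+(?P<fields>.*)")

def parse_avc(record: str) -> dict:
    """Parse one AVC record like those above into a field dictionary."""
    m = AVC_RE.search(record)
    if not m:
        raise ValueError("not an AVC record")
    out = {"denied": m.group("perm")}
    for key, value in re.findall(r"(\w+)=(\S+)", m.group("fields")):
        out[key] = value.strip('"')  # comm values are quoted in the log
    return out

# One of the records repeated above:
sample = ('avc:  denied  { bpf } for  pid=4115 comm="bpftool" capability=39 '
          'scontext=system_u:system_r:kernel_t:s0 '
          'tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0')
info = parse_avc(sample)
print(info["denied"], info["comm"], info["capability"])
# bpf bpftool 39
```

With `permissive=0` these denials are enforced, but the adjacent `SYSCALL ... success=yes` records show the bpf(2) calls still succeed, so the denials here are audit noise rather than hard failures.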
Oct 31 01:15:01.594232 env[1316]: time="2025-10-31T01:15:01.594198007Z" level=info msg="TearDown network for sandbox \"a8dc1d78076c25070d75912ab77bb6fc27814dbb8dff1f2793ffbaeb1045cf72\" successfully" Oct 31 01:15:01.594346 env[1316]: time="2025-10-31T01:15:01.594326220Z" level=info msg="StopPodSandbox for \"a8dc1d78076c25070d75912ab77bb6fc27814dbb8dff1f2793ffbaeb1045cf72\" returns successfully" Oct 31 01:15:01.595845 kubelet[2122]: E1031 01:15:01.595198 2122 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:15:01.596242 env[1316]: time="2025-10-31T01:15:01.596111353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kx9d2,Uid:aa45bbe1-c342-47d9-b9fb-8fc8197ae119,Namespace:kube-system,Attempt:1,}" Oct 31 01:15:01.603302 env[1316]: 2025-10-31 01:15:01.550 [INFO][4150] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a8a7d15da3bf8cbdb191a985928692fcb1e7fac6e2665ad7368d328a401536ae" Oct 31 01:15:01.603302 env[1316]: 2025-10-31 01:15:01.551 [INFO][4150] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a8a7d15da3bf8cbdb191a985928692fcb1e7fac6e2665ad7368d328a401536ae" iface="eth0" netns="/var/run/netns/cni-f05f7551-8c16-b41c-dc4f-d8988292bfec" Oct 31 01:15:01.603302 env[1316]: 2025-10-31 01:15:01.551 [INFO][4150] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a8a7d15da3bf8cbdb191a985928692fcb1e7fac6e2665ad7368d328a401536ae" iface="eth0" netns="/var/run/netns/cni-f05f7551-8c16-b41c-dc4f-d8988292bfec" Oct 31 01:15:01.603302 env[1316]: 2025-10-31 01:15:01.551 [INFO][4150] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a8a7d15da3bf8cbdb191a985928692fcb1e7fac6e2665ad7368d328a401536ae" iface="eth0" netns="/var/run/netns/cni-f05f7551-8c16-b41c-dc4f-d8988292bfec" Oct 31 01:15:01.603302 env[1316]: 2025-10-31 01:15:01.551 [INFO][4150] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a8a7d15da3bf8cbdb191a985928692fcb1e7fac6e2665ad7368d328a401536ae" Oct 31 01:15:01.603302 env[1316]: 2025-10-31 01:15:01.551 [INFO][4150] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a8a7d15da3bf8cbdb191a985928692fcb1e7fac6e2665ad7368d328a401536ae" Oct 31 01:15:01.603302 env[1316]: 2025-10-31 01:15:01.582 [INFO][4166] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a8a7d15da3bf8cbdb191a985928692fcb1e7fac6e2665ad7368d328a401536ae" HandleID="k8s-pod-network.a8a7d15da3bf8cbdb191a985928692fcb1e7fac6e2665ad7368d328a401536ae" Workload="localhost-k8s-goldmane--666569f655--wj6mp-eth0" Oct 31 01:15:01.603302 env[1316]: 2025-10-31 01:15:01.582 [INFO][4166] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:15:01.603302 env[1316]: 2025-10-31 01:15:01.586 [INFO][4166] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:15:01.603302 env[1316]: 2025-10-31 01:15:01.592 [WARNING][4166] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a8a7d15da3bf8cbdb191a985928692fcb1e7fac6e2665ad7368d328a401536ae" HandleID="k8s-pod-network.a8a7d15da3bf8cbdb191a985928692fcb1e7fac6e2665ad7368d328a401536ae" Workload="localhost-k8s-goldmane--666569f655--wj6mp-eth0" Oct 31 01:15:01.603302 env[1316]: 2025-10-31 01:15:01.592 [INFO][4166] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a8a7d15da3bf8cbdb191a985928692fcb1e7fac6e2665ad7368d328a401536ae" HandleID="k8s-pod-network.a8a7d15da3bf8cbdb191a985928692fcb1e7fac6e2665ad7368d328a401536ae" Workload="localhost-k8s-goldmane--666569f655--wj6mp-eth0" Oct 31 01:15:01.603302 env[1316]: 2025-10-31 01:15:01.594 [INFO][4166] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:15:01.603302 env[1316]: 2025-10-31 01:15:01.598 [INFO][4150] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a8a7d15da3bf8cbdb191a985928692fcb1e7fac6e2665ad7368d328a401536ae" Oct 31 01:15:01.603797 env[1316]: time="2025-10-31T01:15:01.603418268Z" level=info msg="TearDown network for sandbox \"a8a7d15da3bf8cbdb191a985928692fcb1e7fac6e2665ad7368d328a401536ae\" successfully" Oct 31 01:15:01.603797 env[1316]: time="2025-10-31T01:15:01.603437354Z" level=info msg="StopPodSandbox for \"a8a7d15da3bf8cbdb191a985928692fcb1e7fac6e2665ad7368d328a401536ae\" returns successfully" Oct 31 01:15:01.605698 systemd[1]: run-netns-cni\x2df05f7551\x2d8c16\x2db41c\x2ddc4f\x2dd8988292bfec.mount: Deactivated successfully. 
Oct 31 01:15:01.606924 env[1316]: time="2025-10-31T01:15:01.606897882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-wj6mp,Uid:50cdc712-db7a-41da-8129-57ca3765d884,Namespace:calico-system,Attempt:1,}" Oct 31 01:15:01.608000 audit[4126]: AVC avc: denied { bpf } for pid=4126 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.608000 audit[4126]: AVC avc: denied { bpf } for pid=4126 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.608000 audit[4126]: AVC avc: denied { perfmon } for pid=4126 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.608000 audit[4126]: AVC avc: denied { perfmon } for pid=4126 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.608000 audit[4126]: AVC avc: denied { perfmon } for pid=4126 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.608000 audit[4126]: AVC avc: denied { perfmon } for pid=4126 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.608000 audit[4126]: AVC avc: denied { perfmon } for pid=4126 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.608000 audit[4126]: AVC avc: denied { bpf } for pid=4126 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.608000 audit[4126]: AVC avc: 
denied { bpf } for pid=4126 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.608000 audit: BPF prog-id=30 op=LOAD Oct 31 01:15:01.608000 audit[4126]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffa2e8d1e0 a2=94 a3=1 items=0 ppid=4034 pid=4126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.608000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 01:15:01.608000 audit: BPF prog-id=30 op=UNLOAD Oct 31 01:15:01.608000 audit[4126]: AVC avc: denied { perfmon } for pid=4126 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.608000 audit[4126]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7fffa2e8d2b0 a2=50 a3=7fffa2e8d390 items=0 ppid=4034 pid=4126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.608000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 01:15:01.616000 audit[4126]: AVC avc: denied { bpf } for pid=4126 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.616000 audit[4126]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffa2e8d1f0 a2=28 a3=0 items=0 ppid=4034 pid=4126 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.616000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 01:15:01.616000 audit[4126]: AVC avc: denied { bpf } for pid=4126 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.616000 audit[4126]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffa2e8d220 a2=28 a3=0 items=0 ppid=4034 pid=4126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.616000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 01:15:01.616000 audit[4126]: AVC avc: denied { bpf } for pid=4126 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.616000 audit[4126]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffa2e8d130 a2=28 a3=0 items=0 ppid=4034 pid=4126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.616000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 01:15:01.616000 audit[4126]: AVC avc: denied { bpf 
} for pid=4126 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.616000 audit[4126]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffa2e8d240 a2=28 a3=0 items=0 ppid=4034 pid=4126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.616000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 01:15:01.616000 audit[4126]: AVC avc: denied { bpf } for pid=4126 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.616000 audit[4126]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffa2e8d220 a2=28 a3=0 items=0 ppid=4034 pid=4126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.616000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 01:15:01.616000 audit[4126]: AVC avc: denied { bpf } for pid=4126 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.616000 audit[4126]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffa2e8d210 a2=28 a3=0 items=0 ppid=4034 pid=4126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.616000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 01:15:01.616000 audit[4126]: AVC avc: denied { bpf } for pid=4126 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.616000 audit[4126]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffa2e8d240 a2=28 a3=0 items=0 ppid=4034 pid=4126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.616000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 01:15:01.616000 audit[4126]: AVC avc: denied { bpf } for pid=4126 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.616000 audit[4126]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffa2e8d220 a2=28 a3=0 items=0 ppid=4034 pid=4126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.616000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 01:15:01.616000 audit[4126]: AVC avc: denied { bpf } for pid=4126 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 31 01:15:01.616000 audit[4126]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffa2e8d240 a2=28 a3=0 items=0 ppid=4034 pid=4126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.616000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 01:15:01.616000 audit[4126]: AVC avc: denied { bpf } for pid=4126 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.616000 audit[4126]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffa2e8d210 a2=28 a3=0 items=0 ppid=4034 pid=4126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.616000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 01:15:01.616000 audit[4126]: AVC avc: denied { bpf } for pid=4126 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.616000 audit[4126]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffa2e8d280 a2=28 a3=0 items=0 ppid=4034 pid=4126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.616000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 01:15:01.617000 audit[4126]: AVC avc: denied { perfmon } for pid=4126 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.617000 audit[4126]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fffa2e8d030 a2=50 a3=1 items=0 ppid=4034 pid=4126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.617000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 01:15:01.617000 audit[4126]: AVC avc: denied { bpf } for pid=4126 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.617000 audit[4126]: AVC avc: denied { bpf } for pid=4126 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.617000 audit[4126]: AVC avc: denied { perfmon } for pid=4126 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.617000 audit[4126]: AVC avc: denied { perfmon } for pid=4126 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.617000 audit[4126]: AVC avc: denied { perfmon } for pid=4126 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.617000 audit[4126]: AVC avc: denied { perfmon } for pid=4126 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.617000 audit[4126]: AVC avc: denied { perfmon } for pid=4126 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.617000 audit[4126]: AVC avc: denied { bpf } for pid=4126 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.617000 audit[4126]: AVC avc: denied { bpf } for pid=4126 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.617000 audit: BPF prog-id=31 op=LOAD Oct 31 01:15:01.617000 audit[4126]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fffa2e8d030 a2=94 a3=5 items=0 ppid=4034 pid=4126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.617000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 01:15:01.617000 audit: BPF prog-id=31 op=UNLOAD Oct 31 01:15:01.617000 audit[4126]: AVC avc: denied { perfmon } for pid=4126 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.617000 audit[4126]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fffa2e8d0e0 a2=50 a3=1 items=0 ppid=4034 pid=4126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.617000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 01:15:01.617000 audit[4126]: AVC avc: denied { bpf } for pid=4126 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.617000 audit[4126]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7fffa2e8d200 a2=4 a3=38 items=0 ppid=4034 pid=4126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.617000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 01:15:01.617000 audit[4126]: AVC avc: denied { bpf } for pid=4126 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.617000 audit[4126]: AVC avc: denied { bpf } for pid=4126 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.617000 audit[4126]: AVC avc: denied { perfmon } for pid=4126 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.617000 audit[4126]: AVC avc: denied { bpf } for pid=4126 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 
01:15:01.617000 audit[4126]: AVC avc: denied { perfmon } for pid=4126 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.617000 audit[4126]: AVC avc: denied { perfmon } for pid=4126 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.617000 audit[4126]: AVC avc: denied { perfmon } for pid=4126 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.617000 audit[4126]: AVC avc: denied { perfmon } for pid=4126 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.617000 audit[4126]: AVC avc: denied { perfmon } for pid=4126 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.617000 audit[4126]: AVC avc: denied { bpf } for pid=4126 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.617000 audit[4126]: AVC avc: denied { confidentiality } for pid=4126 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Oct 31 01:15:01.617000 audit[4126]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fffa2e8d250 a2=94 a3=6 items=0 ppid=4034 pid=4126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.617000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 01:15:01.617000 audit[4126]: AVC avc: denied { bpf } for pid=4126 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.617000 audit[4126]: AVC avc: denied { bpf } for pid=4126 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.617000 audit[4126]: AVC avc: denied { perfmon } for pid=4126 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.617000 audit[4126]: AVC avc: denied { bpf } for pid=4126 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.617000 audit[4126]: AVC avc: denied { perfmon } for pid=4126 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.617000 audit[4126]: AVC avc: denied { perfmon } for pid=4126 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.617000 audit[4126]: AVC avc: denied { perfmon } for pid=4126 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.617000 audit[4126]: AVC avc: denied { perfmon } for pid=4126 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.617000 audit[4126]: AVC avc: denied { perfmon } for pid=4126 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.617000 audit[4126]: AVC avc: denied { bpf } for pid=4126 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.617000 audit[4126]: AVC avc: denied { confidentiality } for pid=4126 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Oct 31 01:15:01.617000 audit[4126]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fffa2e8ca00 a2=94 a3=88 items=0 ppid=4034 pid=4126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.617000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 01:15:01.617000 audit[4126]: AVC avc: denied { bpf } for pid=4126 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.617000 audit[4126]: AVC avc: denied { bpf } for pid=4126 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.617000 audit[4126]: AVC avc: denied { perfmon } for pid=4126 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.617000 audit[4126]: AVC avc: denied { bpf } for pid=4126 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 31 01:15:01.617000 audit[4126]: AVC avc: denied { perfmon } for pid=4126 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.617000 audit[4126]: AVC avc: denied { perfmon } for pid=4126 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.617000 audit[4126]: AVC avc: denied { perfmon } for pid=4126 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.617000 audit[4126]: AVC avc: denied { perfmon } for pid=4126 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.617000 audit[4126]: AVC avc: denied { perfmon } for pid=4126 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.617000 audit[4126]: AVC avc: denied { bpf } for pid=4126 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.617000 audit[4126]: AVC avc: denied { confidentiality } for pid=4126 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Oct 31 01:15:01.617000 audit[4126]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fffa2e8ca00 a2=94 a3=88 items=0 ppid=4034 pid=4126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.617000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 01:15:01.618000 audit[4126]: AVC avc: denied { bpf } for pid=4126 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.618000 audit[4126]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fffa2e8e430 a2=10 a3=208 items=0 ppid=4034 pid=4126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.618000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 01:15:01.618000 audit[4126]: AVC avc: denied { bpf } for pid=4126 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.618000 audit[4126]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fffa2e8e2d0 a2=10 a3=3 items=0 ppid=4034 pid=4126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.618000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 01:15:01.618000 audit[4126]: AVC avc: denied { bpf } for pid=4126 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.618000 audit[4126]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fffa2e8e270 a2=10 a3=3 items=0 ppid=4034 pid=4126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.618000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 01:15:01.618000 audit[4126]: AVC avc: denied { bpf } for pid=4126 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 01:15:01.618000 audit[4126]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fffa2e8e270 a2=10 a3=7 items=0 ppid=4034 pid=4126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.618000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 01:15:01.625000 audit: BPF prog-id=26 op=UNLOAD Oct 31 01:15:01.738529 kubelet[2122]: E1031 01:15:01.737267 2122 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:15:01.738529 kubelet[2122]: E1031 01:15:01.737805 2122 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:15:01.742009 kubelet[2122]: E1031 01:15:01.741951 2122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86687d576-r924d" podUID="b7a793cf-29da-4092-aaf4-95f63c307028" Oct 31 01:15:01.744000 audit[4251]: NETFILTER_CFG table=mangle:105 family=2 entries=16 op=nft_register_chain pid=4251 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 31 01:15:01.744000 audit[4251]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffcede822e0 a2=0 a3=7ffcede822cc items=0 ppid=4034 pid=4251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.744000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 31 01:15:01.755000 audit[4249]: NETFILTER_CFG table=nat:106 family=2 entries=15 op=nft_register_chain pid=4249 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 31 01:15:01.755000 audit[4249]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7fff837fb090 a2=0 a3=7fff837fb07c items=0 ppid=4034 pid=4249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.755000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 31 01:15:01.766765 kubelet[2122]: I1031 01:15:01.766691 2122 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-p8xhs" podStartSLOduration=42.766662447 podStartE2EDuration="42.766662447s" podCreationTimestamp="2025-10-31 01:14:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 01:15:01.755599183 +0000 UTC m=+49.373052966" watchObservedRunningTime="2025-10-31 01:15:01.766662447 +0000 UTC m=+49.384116210" Oct 31 01:15:01.765000 audit[4250]: NETFILTER_CFG table=raw:107 family=2 entries=21 op=nft_register_chain pid=4250 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 31 01:15:01.765000 audit[4250]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffd47becc10 a2=0 a3=7ffd47becbfc items=0 ppid=4034 pid=4250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.765000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 31 01:15:01.768000 audit[4248]: NETFILTER_CFG table=filter:108 family=2 entries=170 op=nft_register_chain pid=4248 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 31 01:15:01.768000 audit[4248]: SYSCALL arch=c000003e syscall=46 success=yes exit=98076 a0=3 a1=7ffd82e05e10 a2=0 a3=56301d1e7000 items=0 ppid=4034 pid=4248 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.768000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 31 01:15:01.800392 
systemd-networkd[1085]: cali1fb00410b54: Link UP Oct 31 01:15:01.804053 systemd-networkd[1085]: cali1fb00410b54: Gained carrier Oct 31 01:15:01.804626 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali1fb00410b54: link becomes ready Oct 31 01:15:01.818459 env[1316]: 2025-10-31 01:15:01.676 [INFO][4186] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--wj6mp-eth0 goldmane-666569f655- calico-system 50cdc712-db7a-41da-8129-57ca3765d884 1040 0 2025-10-31 01:14:32 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-wj6mp eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali1fb00410b54 [] [] }} ContainerID="82d435bae1d32287abf190b12691d538e621edf02a8c10c4acb7fa3db9fb8608" Namespace="calico-system" Pod="goldmane-666569f655-wj6mp" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--wj6mp-" Oct 31 01:15:01.818459 env[1316]: 2025-10-31 01:15:01.676 [INFO][4186] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="82d435bae1d32287abf190b12691d538e621edf02a8c10c4acb7fa3db9fb8608" Namespace="calico-system" Pod="goldmane-666569f655-wj6mp" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--wj6mp-eth0" Oct 31 01:15:01.818459 env[1316]: 2025-10-31 01:15:01.718 [INFO][4225] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="82d435bae1d32287abf190b12691d538e621edf02a8c10c4acb7fa3db9fb8608" HandleID="k8s-pod-network.82d435bae1d32287abf190b12691d538e621edf02a8c10c4acb7fa3db9fb8608" Workload="localhost-k8s-goldmane--666569f655--wj6mp-eth0" Oct 31 01:15:01.818459 env[1316]: 2025-10-31 01:15:01.718 [INFO][4225] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="82d435bae1d32287abf190b12691d538e621edf02a8c10c4acb7fa3db9fb8608" HandleID="k8s-pod-network.82d435bae1d32287abf190b12691d538e621edf02a8c10c4acb7fa3db9fb8608" Workload="localhost-k8s-goldmane--666569f655--wj6mp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001396c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-wj6mp", "timestamp":"2025-10-31 01:15:01.718371581 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 01:15:01.818459 env[1316]: 2025-10-31 01:15:01.718 [INFO][4225] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:15:01.818459 env[1316]: 2025-10-31 01:15:01.718 [INFO][4225] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:15:01.818459 env[1316]: 2025-10-31 01:15:01.718 [INFO][4225] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 01:15:01.818459 env[1316]: 2025-10-31 01:15:01.729 [INFO][4225] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.82d435bae1d32287abf190b12691d538e621edf02a8c10c4acb7fa3db9fb8608" host="localhost" Oct 31 01:15:01.818459 env[1316]: 2025-10-31 01:15:01.734 [INFO][4225] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 01:15:01.818459 env[1316]: 2025-10-31 01:15:01.747 [INFO][4225] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 01:15:01.818459 env[1316]: 2025-10-31 01:15:01.759 [INFO][4225] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 01:15:01.818459 env[1316]: 2025-10-31 01:15:01.763 [INFO][4225] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 01:15:01.818459 env[1316]: 
2025-10-31 01:15:01.763 [INFO][4225] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.82d435bae1d32287abf190b12691d538e621edf02a8c10c4acb7fa3db9fb8608" host="localhost" Oct 31 01:15:01.818459 env[1316]: 2025-10-31 01:15:01.767 [INFO][4225] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.82d435bae1d32287abf190b12691d538e621edf02a8c10c4acb7fa3db9fb8608 Oct 31 01:15:01.818459 env[1316]: 2025-10-31 01:15:01.773 [INFO][4225] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.82d435bae1d32287abf190b12691d538e621edf02a8c10c4acb7fa3db9fb8608" host="localhost" Oct 31 01:15:01.818459 env[1316]: 2025-10-31 01:15:01.788 [INFO][4225] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.82d435bae1d32287abf190b12691d538e621edf02a8c10c4acb7fa3db9fb8608" host="localhost" Oct 31 01:15:01.818459 env[1316]: 2025-10-31 01:15:01.788 [INFO][4225] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.82d435bae1d32287abf190b12691d538e621edf02a8c10c4acb7fa3db9fb8608" host="localhost" Oct 31 01:15:01.818459 env[1316]: 2025-10-31 01:15:01.788 [INFO][4225] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
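The repeated `proctitle=` fields in the audit records above are hex-encoded, NUL-separated argv vectors. A minimal decoder (plain Python; the only assumption is the standard audit PROCTITLE encoding):

```python
def decode_proctitle(hex_str: str) -> str:
    """Decode an audit PROCTITLE value: hex-encoded argv, NUL-separated."""
    raw = bytes.fromhex(hex_str)
    return " ".join(arg.decode() for arg in raw.split(b"\x00"))

# The bpftool records above decode to the pinned-program query Calico runs:
print(decode_proctitle(
    "627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F77"
    "0070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F7072"
    "6566696C7465725F76315F63616C69636F5F746D705F41"
))
# -> bpftool --json --pretty prog show pinned /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A
```

The same decoder applied to the `iptables-nft-re` records yields `iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000`, i.e. the rule reloads Felix issues while programming the dataplane.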
Oct 31 01:15:01.818459 env[1316]: 2025-10-31 01:15:01.788 [INFO][4225] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="82d435bae1d32287abf190b12691d538e621edf02a8c10c4acb7fa3db9fb8608" HandleID="k8s-pod-network.82d435bae1d32287abf190b12691d538e621edf02a8c10c4acb7fa3db9fb8608" Workload="localhost-k8s-goldmane--666569f655--wj6mp-eth0" Oct 31 01:15:01.819078 env[1316]: 2025-10-31 01:15:01.791 [INFO][4186] cni-plugin/k8s.go 418: Populated endpoint ContainerID="82d435bae1d32287abf190b12691d538e621edf02a8c10c4acb7fa3db9fb8608" Namespace="calico-system" Pod="goldmane-666569f655-wj6mp" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--wj6mp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--wj6mp-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"50cdc712-db7a-41da-8129-57ca3765d884", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 14, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-wj6mp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1fb00410b54", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:15:01.819078 env[1316]: 2025-10-31 01:15:01.791 [INFO][4186] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="82d435bae1d32287abf190b12691d538e621edf02a8c10c4acb7fa3db9fb8608" Namespace="calico-system" Pod="goldmane-666569f655-wj6mp" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--wj6mp-eth0" Oct 31 01:15:01.819078 env[1316]: 2025-10-31 01:15:01.791 [INFO][4186] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1fb00410b54 ContainerID="82d435bae1d32287abf190b12691d538e621edf02a8c10c4acb7fa3db9fb8608" Namespace="calico-system" Pod="goldmane-666569f655-wj6mp" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--wj6mp-eth0" Oct 31 01:15:01.819078 env[1316]: 2025-10-31 01:15:01.801 [INFO][4186] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="82d435bae1d32287abf190b12691d538e621edf02a8c10c4acb7fa3db9fb8608" Namespace="calico-system" Pod="goldmane-666569f655-wj6mp" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--wj6mp-eth0" Oct 31 01:15:01.819078 env[1316]: 2025-10-31 01:15:01.804 [INFO][4186] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="82d435bae1d32287abf190b12691d538e621edf02a8c10c4acb7fa3db9fb8608" Namespace="calico-system" Pod="goldmane-666569f655-wj6mp" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--wj6mp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--wj6mp-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"50cdc712-db7a-41da-8129-57ca3765d884", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 14, 32, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"82d435bae1d32287abf190b12691d538e621edf02a8c10c4acb7fa3db9fb8608", Pod:"goldmane-666569f655-wj6mp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1fb00410b54", MAC:"72:7d:47:26:fc:89", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:15:01.819078 env[1316]: 2025-10-31 01:15:01.814 [INFO][4186] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="82d435bae1d32287abf190b12691d538e621edf02a8c10c4acb7fa3db9fb8608" Namespace="calico-system" Pod="goldmane-666569f655-wj6mp" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--wj6mp-eth0" Oct 31 01:15:01.820000 audit[4268]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=4268 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:15:01.820000 audit[4268]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc1fdb86e0 a2=0 a3=7ffc1fdb86cc items=0 ppid=2275 pid=4268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.820000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:15:01.826000 audit[4268]: NETFILTER_CFG table=nat:110 family=2 entries=35 op=nft_register_chain pid=4268 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:15:01.826000 audit[4268]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffc1fdb86e0 a2=0 a3=7ffc1fdb86cc items=0 ppid=2275 pid=4268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.826000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:15:01.848712 env[1316]: time="2025-10-31T01:15:01.848525597Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 01:15:01.848712 env[1316]: time="2025-10-31T01:15:01.848633872Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 01:15:01.848712 env[1316]: time="2025-10-31T01:15:01.848657998Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:15:01.848889 env[1316]: time="2025-10-31T01:15:01.848796942Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/82d435bae1d32287abf190b12691d538e621edf02a8c10c4acb7fa3db9fb8608 pid=4283 runtime=io.containerd.runc.v2 Oct 31 01:15:01.852000 audit[4289]: NETFILTER_CFG table=filter:111 family=2 entries=52 op=nft_register_chain pid=4289 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 31 01:15:01.852000 audit[4289]: SYSCALL arch=c000003e syscall=46 success=yes exit=27556 a0=3 a1=7ffc8f19d500 a2=0 a3=7ffc8f19d4ec items=0 ppid=4034 pid=4289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.852000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 31 01:15:01.885149 systemd-resolved[1231]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 01:15:01.888765 systemd-networkd[1085]: cali2c2d884b202: Link UP Oct 31 01:15:01.891916 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali2c2d884b202: link becomes ready Oct 31 01:15:01.891677 systemd-networkd[1085]: cali2c2d884b202: Gained carrier Oct 31 01:15:01.916367 env[1316]: 2025-10-31 01:15:01.690 [INFO][4184] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--kx9d2-eth0 coredns-668d6bf9bc- kube-system aa45bbe1-c342-47d9-b9fb-8fc8197ae119 1041 0 2025-10-31 01:14:20 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost 
coredns-668d6bf9bc-kx9d2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2c2d884b202 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="5c6c0988e21295cc655fa8d6807c1acbe3be85a4b86515cf01b1327a4088fe2b" Namespace="kube-system" Pod="coredns-668d6bf9bc-kx9d2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--kx9d2-" Oct 31 01:15:01.916367 env[1316]: 2025-10-31 01:15:01.690 [INFO][4184] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5c6c0988e21295cc655fa8d6807c1acbe3be85a4b86515cf01b1327a4088fe2b" Namespace="kube-system" Pod="coredns-668d6bf9bc-kx9d2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--kx9d2-eth0" Oct 31 01:15:01.916367 env[1316]: 2025-10-31 01:15:01.739 [INFO][4233] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5c6c0988e21295cc655fa8d6807c1acbe3be85a4b86515cf01b1327a4088fe2b" HandleID="k8s-pod-network.5c6c0988e21295cc655fa8d6807c1acbe3be85a4b86515cf01b1327a4088fe2b" Workload="localhost-k8s-coredns--668d6bf9bc--kx9d2-eth0" Oct 31 01:15:01.916367 env[1316]: 2025-10-31 01:15:01.739 [INFO][4233] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5c6c0988e21295cc655fa8d6807c1acbe3be85a4b86515cf01b1327a4088fe2b" HandleID="k8s-pod-network.5c6c0988e21295cc655fa8d6807c1acbe3be85a4b86515cf01b1327a4088fe2b" Workload="localhost-k8s-coredns--668d6bf9bc--kx9d2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00019f6a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-kx9d2", "timestamp":"2025-10-31 01:15:01.739123553 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 01:15:01.916367 env[1316]: 2025-10-31 01:15:01.739 [INFO][4233] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
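The recurring kubelet `dns.go:153` warnings ("Nameserver limits exceeded") arise because the node's resolv.conf lists four nameservers while the conventional resolver limit (glibc `MAXNS`) is three, so the last entry is silently dropped. A sketch of that truncation — the helper name and warning text are illustrative, not kubelet's actual code:

```python
MAXNS = 3  # conventional glibc resolver limit that kubelet warns about

def effective_nameservers(resolv_conf: str, limit: int = MAXNS) -> list[str]:
    """Keep only the first `limit` nameserver entries, as a resolver would."""
    servers = [fields[1]
               for line in resolv_conf.splitlines()
               if (fields := line.split()) and fields[0] == "nameserver"]
    if len(servers) > limit:
        # mirrors the "applied nameserver line" reported in the log
        print("Nameserver limits exceeded, applied line:", " ".join(servers[:limit]))
    return servers[:limit]

conf = ("nameserver 1.1.1.1\nnameserver 1.0.0.1\n"
        "nameserver 8.8.8.8\nnameserver 8.8.4.4\n")
effective_nameservers(conf)  # 8.8.4.4 is dropped, matching the log's applied line
```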
Oct 31 01:15:01.916367 env[1316]: 2025-10-31 01:15:01.789 [INFO][4233] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:15:01.916367 env[1316]: 2025-10-31 01:15:01.789 [INFO][4233] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 01:15:01.916367 env[1316]: 2025-10-31 01:15:01.826 [INFO][4233] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5c6c0988e21295cc655fa8d6807c1acbe3be85a4b86515cf01b1327a4088fe2b" host="localhost" Oct 31 01:15:01.916367 env[1316]: 2025-10-31 01:15:01.843 [INFO][4233] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 01:15:01.916367 env[1316]: 2025-10-31 01:15:01.855 [INFO][4233] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 01:15:01.916367 env[1316]: 2025-10-31 01:15:01.857 [INFO][4233] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 01:15:01.916367 env[1316]: 2025-10-31 01:15:01.859 [INFO][4233] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 01:15:01.916367 env[1316]: 2025-10-31 01:15:01.859 [INFO][4233] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5c6c0988e21295cc655fa8d6807c1acbe3be85a4b86515cf01b1327a4088fe2b" host="localhost" Oct 31 01:15:01.916367 env[1316]: 2025-10-31 01:15:01.860 [INFO][4233] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5c6c0988e21295cc655fa8d6807c1acbe3be85a4b86515cf01b1327a4088fe2b Oct 31 01:15:01.916367 env[1316]: 2025-10-31 01:15:01.864 [INFO][4233] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5c6c0988e21295cc655fa8d6807c1acbe3be85a4b86515cf01b1327a4088fe2b" host="localhost" Oct 31 01:15:01.916367 env[1316]: 2025-10-31 01:15:01.871 [INFO][4233] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.5c6c0988e21295cc655fa8d6807c1acbe3be85a4b86515cf01b1327a4088fe2b" host="localhost" Oct 31 01:15:01.916367 env[1316]: 2025-10-31 01:15:01.871 [INFO][4233] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.5c6c0988e21295cc655fa8d6807c1acbe3be85a4b86515cf01b1327a4088fe2b" host="localhost" Oct 31 01:15:01.916367 env[1316]: 2025-10-31 01:15:01.871 [INFO][4233] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:15:01.916367 env[1316]: 2025-10-31 01:15:01.871 [INFO][4233] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="5c6c0988e21295cc655fa8d6807c1acbe3be85a4b86515cf01b1327a4088fe2b" HandleID="k8s-pod-network.5c6c0988e21295cc655fa8d6807c1acbe3be85a4b86515cf01b1327a4088fe2b" Workload="localhost-k8s-coredns--668d6bf9bc--kx9d2-eth0" Oct 31 01:15:01.917078 env[1316]: 2025-10-31 01:15:01.884 [INFO][4184] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5c6c0988e21295cc655fa8d6807c1acbe3be85a4b86515cf01b1327a4088fe2b" Namespace="kube-system" Pod="coredns-668d6bf9bc-kx9d2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--kx9d2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--kx9d2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"aa45bbe1-c342-47d9-b9fb-8fc8197ae119", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 14, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-kx9d2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2c2d884b202", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:15:01.917078 env[1316]: 2025-10-31 01:15:01.884 [INFO][4184] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="5c6c0988e21295cc655fa8d6807c1acbe3be85a4b86515cf01b1327a4088fe2b" Namespace="kube-system" Pod="coredns-668d6bf9bc-kx9d2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--kx9d2-eth0" Oct 31 01:15:01.917078 env[1316]: 2025-10-31 01:15:01.884 [INFO][4184] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2c2d884b202 ContainerID="5c6c0988e21295cc655fa8d6807c1acbe3be85a4b86515cf01b1327a4088fe2b" Namespace="kube-system" Pod="coredns-668d6bf9bc-kx9d2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--kx9d2-eth0" Oct 31 01:15:01.917078 env[1316]: 2025-10-31 01:15:01.896 [INFO][4184] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5c6c0988e21295cc655fa8d6807c1acbe3be85a4b86515cf01b1327a4088fe2b" Namespace="kube-system" Pod="coredns-668d6bf9bc-kx9d2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--kx9d2-eth0" Oct 31 
01:15:01.917078 env[1316]: 2025-10-31 01:15:01.896 [INFO][4184] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5c6c0988e21295cc655fa8d6807c1acbe3be85a4b86515cf01b1327a4088fe2b" Namespace="kube-system" Pod="coredns-668d6bf9bc-kx9d2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--kx9d2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--kx9d2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"aa45bbe1-c342-47d9-b9fb-8fc8197ae119", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 14, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5c6c0988e21295cc655fa8d6807c1acbe3be85a4b86515cf01b1327a4088fe2b", Pod:"coredns-668d6bf9bc-kx9d2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2c2d884b202", MAC:"12:70:f5:a2:1f:ea", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:15:01.917078 env[1316]: 2025-10-31 01:15:01.912 [INFO][4184] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5c6c0988e21295cc655fa8d6807c1acbe3be85a4b86515cf01b1327a4088fe2b" Namespace="kube-system" Pod="coredns-668d6bf9bc-kx9d2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--kx9d2-eth0" Oct 31 01:15:01.920325 env[1316]: time="2025-10-31T01:15:01.920267460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-wj6mp,Uid:50cdc712-db7a-41da-8129-57ca3765d884,Namespace:calico-system,Attempt:1,} returns sandbox id \"82d435bae1d32287abf190b12691d538e621edf02a8c10c4acb7fa3db9fb8608\"" Oct 31 01:15:01.923511 env[1316]: time="2025-10-31T01:15:01.923469407Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 31 01:15:01.926000 audit[4333]: NETFILTER_CFG table=filter:112 family=2 entries=44 op=nft_register_chain pid=4333 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 31 01:15:01.926000 audit[4333]: SYSCALL arch=c000003e syscall=46 success=yes exit=21532 a0=3 a1=7ffc52c5f190 a2=0 a3=7ffc52c5f17c items=0 ppid=4034 pid=4333 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:01.926000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 31 01:15:01.935084 env[1316]: time="2025-10-31T01:15:01.935002721Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 01:15:01.935084 env[1316]: time="2025-10-31T01:15:01.935048338Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 01:15:01.935084 env[1316]: time="2025-10-31T01:15:01.935058197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:15:01.935285 env[1316]: time="2025-10-31T01:15:01.935239339Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5c6c0988e21295cc655fa8d6807c1acbe3be85a4b86515cf01b1327a4088fe2b pid=4340 runtime=io.containerd.runc.v2 Oct 31 01:15:01.958537 systemd-resolved[1231]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 01:15:01.981095 env[1316]: time="2025-10-31T01:15:01.981038744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kx9d2,Uid:aa45bbe1-c342-47d9-b9fb-8fc8197ae119,Namespace:kube-system,Attempt:1,} returns sandbox id \"5c6c0988e21295cc655fa8d6807c1acbe3be85a4b86515cf01b1327a4088fe2b\"" Oct 31 01:15:01.981871 kubelet[2122]: E1031 01:15:01.981833 2122 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:15:01.984330 env[1316]: time="2025-10-31T01:15:01.984271981Z" level=info msg="CreateContainer within sandbox \"5c6c0988e21295cc655fa8d6807c1acbe3be85a4b86515cf01b1327a4088fe2b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 31 01:15:02.000082 env[1316]: time="2025-10-31T01:15:02.000020012Z" level=info msg="CreateContainer within sandbox \"5c6c0988e21295cc655fa8d6807c1acbe3be85a4b86515cf01b1327a4088fe2b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"f9eb4f1d0ec64bbfac1d5d0f9b9df68bf1b6cf396a4ee77e03b3ab707937a8cf\"" Oct 31 01:15:02.000678 env[1316]: time="2025-10-31T01:15:02.000642992Z" level=info msg="StartContainer for \"f9eb4f1d0ec64bbfac1d5d0f9b9df68bf1b6cf396a4ee77e03b3ab707937a8cf\"" Oct 31 01:15:02.043491 env[1316]: time="2025-10-31T01:15:02.043425886Z" level=info msg="StartContainer for \"f9eb4f1d0ec64bbfac1d5d0f9b9df68bf1b6cf396a4ee77e03b3ab707937a8cf\" returns successfully" Oct 31 01:15:02.289184 kernel: kauditd_printk_skb: 547 callbacks suppressed Oct 31 01:15:02.289365 kernel: audit: type=1130 audit(1761873302.277:428): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.95:22-10.0.0.1:43422 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:15:02.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.95:22-10.0.0.1:43422 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:15:02.278562 systemd[1]: Started sshd@9-10.0.0.95:22-10.0.0.1:43422.service. 
Oct 31 01:15:02.305746 systemd-networkd[1085]: cali7dc4eb3f79c: Gained IPv6LL Oct 31 01:15:02.312437 env[1316]: time="2025-10-31T01:15:02.312378839Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:15:02.313690 env[1316]: time="2025-10-31T01:15:02.313636371Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 31 01:15:02.313936 kubelet[2122]: E1031 01:15:02.313883 2122 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 01:15:02.314038 kubelet[2122]: E1031 01:15:02.313954 2122 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 01:15:02.314183 kubelet[2122]: E1031 01:15:02.314106 2122 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8zbds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-wj6mp_calico-system(50cdc712-db7a-41da-8129-57ca3765d884): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 31 01:15:02.315347 kubelet[2122]: E1031 01:15:02.315281 2122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wj6mp" podUID="50cdc712-db7a-41da-8129-57ca3765d884" Oct 31 01:15:02.314000 audit[4412]: USER_ACCT pid=4412 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Oct 31 01:15:02.316280 sshd[4412]: Accepted publickey for core from 10.0.0.1 port 43422 ssh2: RSA SHA256:BzWaVf4M0LrLtWllQvHpK+M/9x+T9duV7gwz9J5cQAA Oct 31 01:15:02.316678 kernel: audit: type=1101 audit(1761873302.314:429): pid=4412 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:02.315000 audit[4412]: CRED_ACQ pid=4412 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:02.317638 kernel: audit: type=1103 audit(1761873302.315:430): pid=4412 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:02.317692 kernel: audit: type=1006 audit(1761873302.315:431): pid=4412 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Oct 31 01:15:02.317714 kernel: audit: type=1300 audit(1761873302.315:431): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff0d8a20a0 a2=3 a3=0 items=0 ppid=1 pid=4412 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:02.317738 kernel: audit: type=1327 audit(1761873302.315:431): proctitle=737368643A20636F7265205B707269765D Oct 31 01:15:02.315000 audit[4412]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff0d8a20a0 a2=3 a3=0 items=0 ppid=1 pid=4412 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:02.315000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 01:15:02.318114 sshd[4412]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 01:15:02.323351 systemd-logind[1300]: New session 10 of user core. Oct 31 01:15:02.324398 systemd[1]: Started session-10.scope. Oct 31 01:15:02.328000 audit[4412]: USER_START pid=4412 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:02.328000 audit[4415]: CRED_ACQ pid=4415 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:02.365690 kernel: audit: type=1105 audit(1761873302.328:432): pid=4412 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:02.365737 kernel: audit: type=1103 audit(1761873302.328:433): pid=4415 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:02.487305 env[1316]: time="2025-10-31T01:15:02.487245060Z" level=info msg="StopPodSandbox for \"6253fcc6f01781fb7beac3beb7069b429a82e077afcb9e1f805aa36607712838\"" Oct 31 01:15:02.487495 env[1316]: time="2025-10-31T01:15:02.487266741Z" level=info msg="StopPodSandbox for \"679a0f53791fd69c79c416e05f0d217c02eaabfee988b0580619de6c328519d7\"" Oct 31 01:15:02.561784 
systemd-networkd[1085]: vxlan.calico: Gained IPv6LL Oct 31 01:15:02.640753 sshd[4412]: pam_unix(sshd:session): session closed for user core Oct 31 01:15:02.640000 audit[4412]: USER_END pid=4412 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:02.643380 systemd[1]: sshd@9-10.0.0.95:22-10.0.0.1:43422.service: Deactivated successfully. Oct 31 01:15:02.644663 systemd[1]: session-10.scope: Deactivated successfully. Oct 31 01:15:02.644724 systemd-logind[1300]: Session 10 logged out. Waiting for processes to exit. Oct 31 01:15:02.645764 systemd-logind[1300]: Removed session 10. Oct 31 01:15:02.640000 audit[4412]: CRED_DISP pid=4412 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:02.656948 kernel: audit: type=1106 audit(1761873302.640:434): pid=4412 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:02.657085 kernel: audit: type=1104 audit(1761873302.640:435): pid=4412 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:02.642000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.95:22-10.0.0.1:43422 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:15:02.742639 kubelet[2122]: E1031 01:15:02.742576 2122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wj6mp" podUID="50cdc712-db7a-41da-8129-57ca3765d884" Oct 31 01:15:02.747572 kubelet[2122]: E1031 01:15:02.746884 2122 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:15:02.747572 kubelet[2122]: E1031 01:15:02.747125 2122 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:15:02.755595 env[1316]: 2025-10-31 01:15:02.668 [INFO][4448] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6253fcc6f01781fb7beac3beb7069b429a82e077afcb9e1f805aa36607712838" Oct 31 01:15:02.755595 env[1316]: 2025-10-31 01:15:02.668 [INFO][4448] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6253fcc6f01781fb7beac3beb7069b429a82e077afcb9e1f805aa36607712838" iface="eth0" netns="/var/run/netns/cni-72e49e2b-15c3-1974-a3fe-b623e1e85815" Oct 31 01:15:02.755595 env[1316]: 2025-10-31 01:15:02.668 [INFO][4448] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="6253fcc6f01781fb7beac3beb7069b429a82e077afcb9e1f805aa36607712838" iface="eth0" netns="/var/run/netns/cni-72e49e2b-15c3-1974-a3fe-b623e1e85815" Oct 31 01:15:02.755595 env[1316]: 2025-10-31 01:15:02.668 [INFO][4448] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6253fcc6f01781fb7beac3beb7069b429a82e077afcb9e1f805aa36607712838" iface="eth0" netns="/var/run/netns/cni-72e49e2b-15c3-1974-a3fe-b623e1e85815" Oct 31 01:15:02.755595 env[1316]: 2025-10-31 01:15:02.668 [INFO][4448] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6253fcc6f01781fb7beac3beb7069b429a82e077afcb9e1f805aa36607712838" Oct 31 01:15:02.755595 env[1316]: 2025-10-31 01:15:02.668 [INFO][4448] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6253fcc6f01781fb7beac3beb7069b429a82e077afcb9e1f805aa36607712838" Oct 31 01:15:02.755595 env[1316]: 2025-10-31 01:15:02.729 [INFO][4468] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6253fcc6f01781fb7beac3beb7069b429a82e077afcb9e1f805aa36607712838" HandleID="k8s-pod-network.6253fcc6f01781fb7beac3beb7069b429a82e077afcb9e1f805aa36607712838" Workload="localhost-k8s-csi--node--driver--fd8js-eth0" Oct 31 01:15:02.755595 env[1316]: 2025-10-31 01:15:02.730 [INFO][4468] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:15:02.755595 env[1316]: 2025-10-31 01:15:02.730 [INFO][4468] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:15:02.755595 env[1316]: 2025-10-31 01:15:02.739 [WARNING][4468] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6253fcc6f01781fb7beac3beb7069b429a82e077afcb9e1f805aa36607712838" HandleID="k8s-pod-network.6253fcc6f01781fb7beac3beb7069b429a82e077afcb9e1f805aa36607712838" Workload="localhost-k8s-csi--node--driver--fd8js-eth0" Oct 31 01:15:02.755595 env[1316]: 2025-10-31 01:15:02.739 [INFO][4468] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6253fcc6f01781fb7beac3beb7069b429a82e077afcb9e1f805aa36607712838" HandleID="k8s-pod-network.6253fcc6f01781fb7beac3beb7069b429a82e077afcb9e1f805aa36607712838" Workload="localhost-k8s-csi--node--driver--fd8js-eth0" Oct 31 01:15:02.755595 env[1316]: 2025-10-31 01:15:02.740 [INFO][4468] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:15:02.755595 env[1316]: 2025-10-31 01:15:02.752 [INFO][4448] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6253fcc6f01781fb7beac3beb7069b429a82e077afcb9e1f805aa36607712838" Oct 31 01:15:02.763087 systemd[1]: run-netns-cni\x2d72e49e2b\x2d15c3\x2d1974\x2da3fe\x2db623e1e85815.mount: Deactivated successfully. 
Oct 31 01:15:02.766165 env[1316]: time="2025-10-31T01:15:02.766131051Z" level=info msg="TearDown network for sandbox \"6253fcc6f01781fb7beac3beb7069b429a82e077afcb9e1f805aa36607712838\" successfully" Oct 31 01:15:02.766270 env[1316]: time="2025-10-31T01:15:02.766250326Z" level=info msg="StopPodSandbox for \"6253fcc6f01781fb7beac3beb7069b429a82e077afcb9e1f805aa36607712838\" returns successfully" Oct 31 01:15:02.767215 env[1316]: time="2025-10-31T01:15:02.767193955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fd8js,Uid:bd0bddee-8a85-4f55-a28b-a795608cb1fb,Namespace:calico-system,Attempt:1,}" Oct 31 01:15:02.769101 kubelet[2122]: I1031 01:15:02.768544 2122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-kx9d2" podStartSLOduration=42.768518053 podStartE2EDuration="42.768518053s" podCreationTimestamp="2025-10-31 01:14:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 01:15:02.767593421 +0000 UTC m=+50.385047184" watchObservedRunningTime="2025-10-31 01:15:02.768518053 +0000 UTC m=+50.385971806" Oct 31 01:15:02.771000 audit[4484]: NETFILTER_CFG table=filter:113 family=2 entries=14 op=nft_register_rule pid=4484 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:15:02.771000 audit[4484]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffec852ef00 a2=0 a3=7ffec852eeec items=0 ppid=2275 pid=4484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:02.771000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:15:02.776708 env[1316]: 2025-10-31 01:15:02.682 [INFO][4447] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="679a0f53791fd69c79c416e05f0d217c02eaabfee988b0580619de6c328519d7" Oct 31 01:15:02.776708 env[1316]: 2025-10-31 01:15:02.682 [INFO][4447] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="679a0f53791fd69c79c416e05f0d217c02eaabfee988b0580619de6c328519d7" iface="eth0" netns="/var/run/netns/cni-ca784de7-217d-6c62-470e-2dd9b00690b1" Oct 31 01:15:02.776708 env[1316]: 2025-10-31 01:15:02.682 [INFO][4447] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="679a0f53791fd69c79c416e05f0d217c02eaabfee988b0580619de6c328519d7" iface="eth0" netns="/var/run/netns/cni-ca784de7-217d-6c62-470e-2dd9b00690b1" Oct 31 01:15:02.776708 env[1316]: 2025-10-31 01:15:02.682 [INFO][4447] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="679a0f53791fd69c79c416e05f0d217c02eaabfee988b0580619de6c328519d7" iface="eth0" netns="/var/run/netns/cni-ca784de7-217d-6c62-470e-2dd9b00690b1" Oct 31 01:15:02.776708 env[1316]: 2025-10-31 01:15:02.682 [INFO][4447] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="679a0f53791fd69c79c416e05f0d217c02eaabfee988b0580619de6c328519d7" Oct 31 01:15:02.776708 env[1316]: 2025-10-31 01:15:02.682 [INFO][4447] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="679a0f53791fd69c79c416e05f0d217c02eaabfee988b0580619de6c328519d7" Oct 31 01:15:02.776708 env[1316]: 2025-10-31 01:15:02.747 [INFO][4473] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="679a0f53791fd69c79c416e05f0d217c02eaabfee988b0580619de6c328519d7" HandleID="k8s-pod-network.679a0f53791fd69c79c416e05f0d217c02eaabfee988b0580619de6c328519d7" Workload="localhost-k8s-calico--kube--controllers--85445fc7bc--269qr-eth0" Oct 31 01:15:02.776708 env[1316]: 2025-10-31 01:15:02.747 [INFO][4473] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Oct 31 01:15:02.776708 env[1316]: 2025-10-31 01:15:02.747 [INFO][4473] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:15:02.776708 env[1316]: 2025-10-31 01:15:02.753 [WARNING][4473] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="679a0f53791fd69c79c416e05f0d217c02eaabfee988b0580619de6c328519d7" HandleID="k8s-pod-network.679a0f53791fd69c79c416e05f0d217c02eaabfee988b0580619de6c328519d7" Workload="localhost-k8s-calico--kube--controllers--85445fc7bc--269qr-eth0" Oct 31 01:15:02.776708 env[1316]: 2025-10-31 01:15:02.753 [INFO][4473] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="679a0f53791fd69c79c416e05f0d217c02eaabfee988b0580619de6c328519d7" HandleID="k8s-pod-network.679a0f53791fd69c79c416e05f0d217c02eaabfee988b0580619de6c328519d7" Workload="localhost-k8s-calico--kube--controllers--85445fc7bc--269qr-eth0" Oct 31 01:15:02.776708 env[1316]: 2025-10-31 01:15:02.763 [INFO][4473] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:15:02.776708 env[1316]: 2025-10-31 01:15:02.774 [INFO][4447] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="679a0f53791fd69c79c416e05f0d217c02eaabfee988b0580619de6c328519d7" Oct 31 01:15:02.777321 env[1316]: time="2025-10-31T01:15:02.776929910Z" level=info msg="TearDown network for sandbox \"679a0f53791fd69c79c416e05f0d217c02eaabfee988b0580619de6c328519d7\" successfully" Oct 31 01:15:02.777321 env[1316]: time="2025-10-31T01:15:02.776969584Z" level=info msg="StopPodSandbox for \"679a0f53791fd69c79c416e05f0d217c02eaabfee988b0580619de6c328519d7\" returns successfully" Oct 31 01:15:02.779679 systemd[1]: run-netns-cni\x2dca784de7\x2d217d\x2d6c62\x2d470e\x2d2dd9b00690b1.mount: Deactivated successfully. 
Oct 31 01:15:02.779000 audit[4484]: NETFILTER_CFG table=nat:114 family=2 entries=20 op=nft_register_rule pid=4484 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:15:02.779000 audit[4484]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffec852ef00 a2=0 a3=7ffec852eeec items=0 ppid=2275 pid=4484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:02.779000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:15:02.784631 env[1316]: time="2025-10-31T01:15:02.784571648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85445fc7bc-269qr,Uid:cbcd2bd9-2395-4730-b047-aac75539fb47,Namespace:calico-system,Attempt:1,}" Oct 31 01:15:02.906623 systemd-networkd[1085]: calia8bea715de0: Link UP Oct 31 01:15:02.912660 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Oct 31 01:15:02.913313 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calia8bea715de0: link becomes ready Oct 31 01:15:02.912864 systemd-networkd[1085]: calia8bea715de0: Gained carrier Oct 31 01:15:02.925785 env[1316]: 2025-10-31 01:15:02.841 [INFO][4498] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--85445fc7bc--269qr-eth0 calico-kube-controllers-85445fc7bc- calico-system cbcd2bd9-2395-4730-b047-aac75539fb47 1078 0 2025-10-31 01:14:35 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:85445fc7bc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-85445fc7bc-269qr eth0 calico-kube-controllers [] [] 
[kns.calico-system ksa.calico-system.calico-kube-controllers] calia8bea715de0 [] [] }} ContainerID="a18e7b73eb6924aa5d93ac593b397d1f58c3058ee064afb891d9a9bb63044ce1" Namespace="calico-system" Pod="calico-kube-controllers-85445fc7bc-269qr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85445fc7bc--269qr-" Oct 31 01:15:02.925785 env[1316]: 2025-10-31 01:15:02.842 [INFO][4498] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a18e7b73eb6924aa5d93ac593b397d1f58c3058ee064afb891d9a9bb63044ce1" Namespace="calico-system" Pod="calico-kube-controllers-85445fc7bc-269qr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85445fc7bc--269qr-eth0" Oct 31 01:15:02.925785 env[1316]: 2025-10-31 01:15:02.867 [INFO][4523] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a18e7b73eb6924aa5d93ac593b397d1f58c3058ee064afb891d9a9bb63044ce1" HandleID="k8s-pod-network.a18e7b73eb6924aa5d93ac593b397d1f58c3058ee064afb891d9a9bb63044ce1" Workload="localhost-k8s-calico--kube--controllers--85445fc7bc--269qr-eth0" Oct 31 01:15:02.925785 env[1316]: 2025-10-31 01:15:02.867 [INFO][4523] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a18e7b73eb6924aa5d93ac593b397d1f58c3058ee064afb891d9a9bb63044ce1" HandleID="k8s-pod-network.a18e7b73eb6924aa5d93ac593b397d1f58c3058ee064afb891d9a9bb63044ce1" Workload="localhost-k8s-calico--kube--controllers--85445fc7bc--269qr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138740), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-85445fc7bc-269qr", "timestamp":"2025-10-31 01:15:02.867275343 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 01:15:02.925785 env[1316]: 2025-10-31 01:15:02.867 [INFO][4523] 
ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:15:02.925785 env[1316]: 2025-10-31 01:15:02.867 [INFO][4523] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:15:02.925785 env[1316]: 2025-10-31 01:15:02.867 [INFO][4523] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 01:15:02.925785 env[1316]: 2025-10-31 01:15:02.872 [INFO][4523] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a18e7b73eb6924aa5d93ac593b397d1f58c3058ee064afb891d9a9bb63044ce1" host="localhost" Oct 31 01:15:02.925785 env[1316]: 2025-10-31 01:15:02.876 [INFO][4523] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 01:15:02.925785 env[1316]: 2025-10-31 01:15:02.885 [INFO][4523] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 01:15:02.925785 env[1316]: 2025-10-31 01:15:02.886 [INFO][4523] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 01:15:02.925785 env[1316]: 2025-10-31 01:15:02.888 [INFO][4523] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 01:15:02.925785 env[1316]: 2025-10-31 01:15:02.888 [INFO][4523] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a18e7b73eb6924aa5d93ac593b397d1f58c3058ee064afb891d9a9bb63044ce1" host="localhost" Oct 31 01:15:02.925785 env[1316]: 2025-10-31 01:15:02.889 [INFO][4523] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a18e7b73eb6924aa5d93ac593b397d1f58c3058ee064afb891d9a9bb63044ce1 Oct 31 01:15:02.925785 env[1316]: 2025-10-31 01:15:02.893 [INFO][4523] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a18e7b73eb6924aa5d93ac593b397d1f58c3058ee064afb891d9a9bb63044ce1" host="localhost" Oct 31 01:15:02.925785 env[1316]: 2025-10-31 01:15:02.898 [INFO][4523] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.a18e7b73eb6924aa5d93ac593b397d1f58c3058ee064afb891d9a9bb63044ce1" host="localhost" Oct 31 01:15:02.925785 env[1316]: 2025-10-31 01:15:02.898 [INFO][4523] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.a18e7b73eb6924aa5d93ac593b397d1f58c3058ee064afb891d9a9bb63044ce1" host="localhost" Oct 31 01:15:02.925785 env[1316]: 2025-10-31 01:15:02.899 [INFO][4523] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:15:02.925785 env[1316]: 2025-10-31 01:15:02.899 [INFO][4523] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="a18e7b73eb6924aa5d93ac593b397d1f58c3058ee064afb891d9a9bb63044ce1" HandleID="k8s-pod-network.a18e7b73eb6924aa5d93ac593b397d1f58c3058ee064afb891d9a9bb63044ce1" Workload="localhost-k8s-calico--kube--controllers--85445fc7bc--269qr-eth0" Oct 31 01:15:02.926485 env[1316]: 2025-10-31 01:15:02.903 [INFO][4498] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a18e7b73eb6924aa5d93ac593b397d1f58c3058ee064afb891d9a9bb63044ce1" Namespace="calico-system" Pod="calico-kube-controllers-85445fc7bc-269qr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85445fc7bc--269qr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--85445fc7bc--269qr-eth0", GenerateName:"calico-kube-controllers-85445fc7bc-", Namespace:"calico-system", SelfLink:"", UID:"cbcd2bd9-2395-4730-b047-aac75539fb47", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 14, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85445fc7bc", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-85445fc7bc-269qr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia8bea715de0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:15:02.926485 env[1316]: 2025-10-31 01:15:02.903 [INFO][4498] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="a18e7b73eb6924aa5d93ac593b397d1f58c3058ee064afb891d9a9bb63044ce1" Namespace="calico-system" Pod="calico-kube-controllers-85445fc7bc-269qr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85445fc7bc--269qr-eth0" Oct 31 01:15:02.926485 env[1316]: 2025-10-31 01:15:02.903 [INFO][4498] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia8bea715de0 ContainerID="a18e7b73eb6924aa5d93ac593b397d1f58c3058ee064afb891d9a9bb63044ce1" Namespace="calico-system" Pod="calico-kube-controllers-85445fc7bc-269qr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85445fc7bc--269qr-eth0" Oct 31 01:15:02.926485 env[1316]: 2025-10-31 01:15:02.913 [INFO][4498] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a18e7b73eb6924aa5d93ac593b397d1f58c3058ee064afb891d9a9bb63044ce1" Namespace="calico-system" Pod="calico-kube-controllers-85445fc7bc-269qr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85445fc7bc--269qr-eth0" Oct 31 01:15:02.926485 
env[1316]: 2025-10-31 01:15:02.914 [INFO][4498] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a18e7b73eb6924aa5d93ac593b397d1f58c3058ee064afb891d9a9bb63044ce1" Namespace="calico-system" Pod="calico-kube-controllers-85445fc7bc-269qr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85445fc7bc--269qr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--85445fc7bc--269qr-eth0", GenerateName:"calico-kube-controllers-85445fc7bc-", Namespace:"calico-system", SelfLink:"", UID:"cbcd2bd9-2395-4730-b047-aac75539fb47", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 14, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85445fc7bc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a18e7b73eb6924aa5d93ac593b397d1f58c3058ee064afb891d9a9bb63044ce1", Pod:"calico-kube-controllers-85445fc7bc-269qr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia8bea715de0", MAC:"5e:2d:70:25:39:3c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:15:02.926485 
env[1316]: 2025-10-31 01:15:02.923 [INFO][4498] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a18e7b73eb6924aa5d93ac593b397d1f58c3058ee064afb891d9a9bb63044ce1" Namespace="calico-system" Pod="calico-kube-controllers-85445fc7bc-269qr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85445fc7bc--269qr-eth0" Oct 31 01:15:02.935000 audit[4540]: NETFILTER_CFG table=filter:115 family=2 entries=58 op=nft_register_chain pid=4540 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 31 01:15:02.935000 audit[4540]: SYSCALL arch=c000003e syscall=46 success=yes exit=27180 a0=3 a1=7ffd4e4a4320 a2=0 a3=7ffd4e4a430c items=0 ppid=4034 pid=4540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:02.935000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 31 01:15:02.941878 env[1316]: time="2025-10-31T01:15:02.941787910Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 01:15:02.941955 env[1316]: time="2025-10-31T01:15:02.941886707Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 01:15:02.941955 env[1316]: time="2025-10-31T01:15:02.941914931Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:15:02.942453 env[1316]: time="2025-10-31T01:15:02.942223115Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a18e7b73eb6924aa5d93ac593b397d1f58c3058ee064afb891d9a9bb63044ce1 pid=4548 runtime=io.containerd.runc.v2 Oct 31 01:15:02.967095 systemd-resolved[1231]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 01:15:02.999632 env[1316]: time="2025-10-31T01:15:02.999552628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85445fc7bc-269qr,Uid:cbcd2bd9-2395-4730-b047-aac75539fb47,Namespace:calico-system,Attempt:1,} returns sandbox id \"a18e7b73eb6924aa5d93ac593b397d1f58c3058ee064afb891d9a9bb63044ce1\"" Oct 31 01:15:03.001861 env[1316]: time="2025-10-31T01:15:03.001772293Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 31 01:15:03.014643 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calie61c2935cb0: link becomes ready Oct 31 01:15:03.022158 systemd-networkd[1085]: cali1fb00410b54: Gained IPv6LL Oct 31 01:15:03.028971 systemd-networkd[1085]: calie61c2935cb0: Link UP Oct 31 01:15:03.029634 env[1316]: 2025-10-31 01:15:02.835 [INFO][4485] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--fd8js-eth0 csi-node-driver- calico-system bd0bddee-8a85-4f55-a28b-a795608cb1fb 1077 0 2025-10-31 01:14:35 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-fd8js eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie61c2935cb0 [] [] }} 
ContainerID="aef3346542189330a59fc2271cdc0bcd80fe3f9b1c9be0307c90b08fe6f454d6" Namespace="calico-system" Pod="csi-node-driver-fd8js" WorkloadEndpoint="localhost-k8s-csi--node--driver--fd8js-" Oct 31 01:15:03.029634 env[1316]: 2025-10-31 01:15:02.835 [INFO][4485] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="aef3346542189330a59fc2271cdc0bcd80fe3f9b1c9be0307c90b08fe6f454d6" Namespace="calico-system" Pod="csi-node-driver-fd8js" WorkloadEndpoint="localhost-k8s-csi--node--driver--fd8js-eth0" Oct 31 01:15:03.029634 env[1316]: 2025-10-31 01:15:02.867 [INFO][4516] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aef3346542189330a59fc2271cdc0bcd80fe3f9b1c9be0307c90b08fe6f454d6" HandleID="k8s-pod-network.aef3346542189330a59fc2271cdc0bcd80fe3f9b1c9be0307c90b08fe6f454d6" Workload="localhost-k8s-csi--node--driver--fd8js-eth0" Oct 31 01:15:03.029634 env[1316]: 2025-10-31 01:15:02.868 [INFO][4516] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="aef3346542189330a59fc2271cdc0bcd80fe3f9b1c9be0307c90b08fe6f454d6" HandleID="k8s-pod-network.aef3346542189330a59fc2271cdc0bcd80fe3f9b1c9be0307c90b08fe6f454d6" Workload="localhost-k8s-csi--node--driver--fd8js-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e76f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-fd8js", "timestamp":"2025-10-31 01:15:02.867715427 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 01:15:03.029634 env[1316]: 2025-10-31 01:15:02.868 [INFO][4516] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:15:03.029634 env[1316]: 2025-10-31 01:15:02.898 [INFO][4516] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 01:15:03.029634 env[1316]: 2025-10-31 01:15:02.898 [INFO][4516] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 01:15:03.029634 env[1316]: 2025-10-31 01:15:02.974 [INFO][4516] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.aef3346542189330a59fc2271cdc0bcd80fe3f9b1c9be0307c90b08fe6f454d6" host="localhost" Oct 31 01:15:03.029634 env[1316]: 2025-10-31 01:15:02.979 [INFO][4516] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 01:15:03.029634 env[1316]: 2025-10-31 01:15:02.986 [INFO][4516] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 01:15:03.029634 env[1316]: 2025-10-31 01:15:02.991 [INFO][4516] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 01:15:03.029634 env[1316]: 2025-10-31 01:15:02.993 [INFO][4516] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 01:15:03.029634 env[1316]: 2025-10-31 01:15:02.993 [INFO][4516] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.aef3346542189330a59fc2271cdc0bcd80fe3f9b1c9be0307c90b08fe6f454d6" host="localhost" Oct 31 01:15:03.029634 env[1316]: 2025-10-31 01:15:02.996 [INFO][4516] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.aef3346542189330a59fc2271cdc0bcd80fe3f9b1c9be0307c90b08fe6f454d6 Oct 31 01:15:03.029634 env[1316]: 2025-10-31 01:15:03.002 [INFO][4516] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.aef3346542189330a59fc2271cdc0bcd80fe3f9b1c9be0307c90b08fe6f454d6" host="localhost" Oct 31 01:15:03.029634 env[1316]: 2025-10-31 01:15:03.009 [INFO][4516] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.aef3346542189330a59fc2271cdc0bcd80fe3f9b1c9be0307c90b08fe6f454d6" host="localhost" Oct 31 
01:15:03.029634 env[1316]: 2025-10-31 01:15:03.009 [INFO][4516] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.aef3346542189330a59fc2271cdc0bcd80fe3f9b1c9be0307c90b08fe6f454d6" host="localhost" Oct 31 01:15:03.029634 env[1316]: 2025-10-31 01:15:03.009 [INFO][4516] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:15:03.029634 env[1316]: 2025-10-31 01:15:03.010 [INFO][4516] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="aef3346542189330a59fc2271cdc0bcd80fe3f9b1c9be0307c90b08fe6f454d6" HandleID="k8s-pod-network.aef3346542189330a59fc2271cdc0bcd80fe3f9b1c9be0307c90b08fe6f454d6" Workload="localhost-k8s-csi--node--driver--fd8js-eth0" Oct 31 01:15:03.030289 env[1316]: 2025-10-31 01:15:03.012 [INFO][4485] cni-plugin/k8s.go 418: Populated endpoint ContainerID="aef3346542189330a59fc2271cdc0bcd80fe3f9b1c9be0307c90b08fe6f454d6" Namespace="calico-system" Pod="csi-node-driver-fd8js" WorkloadEndpoint="localhost-k8s-csi--node--driver--fd8js-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--fd8js-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bd0bddee-8a85-4f55-a28b-a795608cb1fb", ResourceVersion:"1077", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 14, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-fd8js", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie61c2935cb0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:15:03.030289 env[1316]: 2025-10-31 01:15:03.012 [INFO][4485] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="aef3346542189330a59fc2271cdc0bcd80fe3f9b1c9be0307c90b08fe6f454d6" Namespace="calico-system" Pod="csi-node-driver-fd8js" WorkloadEndpoint="localhost-k8s-csi--node--driver--fd8js-eth0" Oct 31 01:15:03.030289 env[1316]: 2025-10-31 01:15:03.012 [INFO][4485] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie61c2935cb0 ContainerID="aef3346542189330a59fc2271cdc0bcd80fe3f9b1c9be0307c90b08fe6f454d6" Namespace="calico-system" Pod="csi-node-driver-fd8js" WorkloadEndpoint="localhost-k8s-csi--node--driver--fd8js-eth0" Oct 31 01:15:03.030289 env[1316]: 2025-10-31 01:15:03.014 [INFO][4485] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aef3346542189330a59fc2271cdc0bcd80fe3f9b1c9be0307c90b08fe6f454d6" Namespace="calico-system" Pod="csi-node-driver-fd8js" WorkloadEndpoint="localhost-k8s-csi--node--driver--fd8js-eth0" Oct 31 01:15:03.030289 env[1316]: 2025-10-31 01:15:03.015 [INFO][4485] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="aef3346542189330a59fc2271cdc0bcd80fe3f9b1c9be0307c90b08fe6f454d6" Namespace="calico-system" Pod="csi-node-driver-fd8js" WorkloadEndpoint="localhost-k8s-csi--node--driver--fd8js-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--fd8js-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bd0bddee-8a85-4f55-a28b-a795608cb1fb", ResourceVersion:"1077", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 14, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aef3346542189330a59fc2271cdc0bcd80fe3f9b1c9be0307c90b08fe6f454d6", Pod:"csi-node-driver-fd8js", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie61c2935cb0", MAC:"72:ee:a1:7a:fe:93", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:15:03.030289 env[1316]: 2025-10-31 01:15:03.025 [INFO][4485] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="aef3346542189330a59fc2271cdc0bcd80fe3f9b1c9be0307c90b08fe6f454d6" Namespace="calico-system" Pod="csi-node-driver-fd8js" WorkloadEndpoint="localhost-k8s-csi--node--driver--fd8js-eth0" Oct 31 01:15:03.031478 systemd-networkd[1085]: calie61c2935cb0: Gained carrier Oct 31 01:15:03.043998 env[1316]: time="2025-10-31T01:15:03.043917115Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 01:15:03.043998 env[1316]: time="2025-10-31T01:15:03.043963212Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 01:15:03.044237 env[1316]: time="2025-10-31T01:15:03.043977579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:15:03.044391 env[1316]: time="2025-10-31T01:15:03.044307394Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/aef3346542189330a59fc2271cdc0bcd80fe3f9b1c9be0307c90b08fe6f454d6 pid=4597 runtime=io.containerd.runc.v2 Oct 31 01:15:03.043000 audit[4599]: NETFILTER_CFG table=filter:116 family=2 entries=58 op=nft_register_chain pid=4599 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 31 01:15:03.043000 audit[4599]: SYSCALL arch=c000003e syscall=46 success=yes exit=27164 a0=3 a1=7ffdc76b6880 a2=0 a3=7ffdc76b686c items=0 ppid=4034 pid=4599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:03.043000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 31 01:15:03.069910 systemd-resolved[1231]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 01:15:03.080895 env[1316]: time="2025-10-31T01:15:03.080849092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fd8js,Uid:bd0bddee-8a85-4f55-a28b-a795608cb1fb,Namespace:calico-system,Attempt:1,} returns sandbox id \"aef3346542189330a59fc2271cdc0bcd80fe3f9b1c9be0307c90b08fe6f454d6\"" Oct 31 
01:15:03.137776 systemd-networkd[1085]: cali2c2d884b202: Gained IPv6LL Oct 31 01:15:03.333269 env[1316]: time="2025-10-31T01:15:03.333196605Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:15:03.334519 env[1316]: time="2025-10-31T01:15:03.334470388Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 31 01:15:03.334802 kubelet[2122]: E1031 01:15:03.334759 2122 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 01:15:03.334896 kubelet[2122]: E1031 01:15:03.334816 2122 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 01:15:03.335231 kubelet[2122]: E1031 01:15:03.335132 2122 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hnks7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-85445fc7bc-269qr_calico-system(cbcd2bd9-2395-4730-b047-aac75539fb47): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 31 01:15:03.335525 env[1316]: time="2025-10-31T01:15:03.335193968Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 31 01:15:03.336403 kubelet[2122]: E1031 01:15:03.336347 2122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85445fc7bc-269qr" podUID="cbcd2bd9-2395-4730-b047-aac75539fb47" Oct 31 01:15:03.486657 env[1316]: 
time="2025-10-31T01:15:03.486578184Z" level=info msg="StopPodSandbox for \"88cb726ffde3e673346a9633d2a436490c0a0455c8d7b6d7ee7cfdaab2d6004b\"" Oct 31 01:15:03.562305 env[1316]: 2025-10-31 01:15:03.526 [INFO][4643] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="88cb726ffde3e673346a9633d2a436490c0a0455c8d7b6d7ee7cfdaab2d6004b" Oct 31 01:15:03.562305 env[1316]: 2025-10-31 01:15:03.526 [INFO][4643] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="88cb726ffde3e673346a9633d2a436490c0a0455c8d7b6d7ee7cfdaab2d6004b" iface="eth0" netns="/var/run/netns/cni-ac30d76f-70c7-1dd6-7a53-911ad1211f71" Oct 31 01:15:03.562305 env[1316]: 2025-10-31 01:15:03.526 [INFO][4643] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="88cb726ffde3e673346a9633d2a436490c0a0455c8d7b6d7ee7cfdaab2d6004b" iface="eth0" netns="/var/run/netns/cni-ac30d76f-70c7-1dd6-7a53-911ad1211f71" Oct 31 01:15:03.562305 env[1316]: 2025-10-31 01:15:03.527 [INFO][4643] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="88cb726ffde3e673346a9633d2a436490c0a0455c8d7b6d7ee7cfdaab2d6004b" iface="eth0" netns="/var/run/netns/cni-ac30d76f-70c7-1dd6-7a53-911ad1211f71" Oct 31 01:15:03.562305 env[1316]: 2025-10-31 01:15:03.527 [INFO][4643] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="88cb726ffde3e673346a9633d2a436490c0a0455c8d7b6d7ee7cfdaab2d6004b" Oct 31 01:15:03.562305 env[1316]: 2025-10-31 01:15:03.527 [INFO][4643] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="88cb726ffde3e673346a9633d2a436490c0a0455c8d7b6d7ee7cfdaab2d6004b" Oct 31 01:15:03.562305 env[1316]: 2025-10-31 01:15:03.545 [INFO][4652] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="88cb726ffde3e673346a9633d2a436490c0a0455c8d7b6d7ee7cfdaab2d6004b" HandleID="k8s-pod-network.88cb726ffde3e673346a9633d2a436490c0a0455c8d7b6d7ee7cfdaab2d6004b" Workload="localhost-k8s-calico--apiserver--86687d576--lcpfh-eth0" Oct 31 01:15:03.562305 env[1316]: 2025-10-31 01:15:03.545 [INFO][4652] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:15:03.562305 env[1316]: 2025-10-31 01:15:03.545 [INFO][4652] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:15:03.562305 env[1316]: 2025-10-31 01:15:03.556 [WARNING][4652] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="88cb726ffde3e673346a9633d2a436490c0a0455c8d7b6d7ee7cfdaab2d6004b" HandleID="k8s-pod-network.88cb726ffde3e673346a9633d2a436490c0a0455c8d7b6d7ee7cfdaab2d6004b" Workload="localhost-k8s-calico--apiserver--86687d576--lcpfh-eth0" Oct 31 01:15:03.562305 env[1316]: 2025-10-31 01:15:03.556 [INFO][4652] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="88cb726ffde3e673346a9633d2a436490c0a0455c8d7b6d7ee7cfdaab2d6004b" HandleID="k8s-pod-network.88cb726ffde3e673346a9633d2a436490c0a0455c8d7b6d7ee7cfdaab2d6004b" Workload="localhost-k8s-calico--apiserver--86687d576--lcpfh-eth0" Oct 31 01:15:03.562305 env[1316]: 2025-10-31 01:15:03.558 [INFO][4652] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:15:03.562305 env[1316]: 2025-10-31 01:15:03.560 [INFO][4643] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="88cb726ffde3e673346a9633d2a436490c0a0455c8d7b6d7ee7cfdaab2d6004b" Oct 31 01:15:03.562923 env[1316]: time="2025-10-31T01:15:03.562460071Z" level=info msg="TearDown network for sandbox \"88cb726ffde3e673346a9633d2a436490c0a0455c8d7b6d7ee7cfdaab2d6004b\" successfully" Oct 31 01:15:03.562923 env[1316]: time="2025-10-31T01:15:03.562492462Z" level=info msg="StopPodSandbox for \"88cb726ffde3e673346a9633d2a436490c0a0455c8d7b6d7ee7cfdaab2d6004b\" returns successfully" Oct 31 01:15:03.563215 env[1316]: time="2025-10-31T01:15:03.563180405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86687d576-lcpfh,Uid:7cb997cc-c908-4ddb-9523-a2aea9785811,Namespace:calico-apiserver,Attempt:1,}" Oct 31 01:15:03.567990 systemd[1]: run-netns-cni\x2dac30d76f\x2d70c7\x2d1dd6\x2d7a53\x2d911ad1211f71.mount: Deactivated successfully. 
Oct 31 01:15:03.683686 systemd-networkd[1085]: calibafd3824a53: Link UP Oct 31 01:15:03.686546 systemd-networkd[1085]: calibafd3824a53: Gained carrier Oct 31 01:15:03.686668 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calibafd3824a53: link becomes ready Oct 31 01:15:03.695905 env[1316]: time="2025-10-31T01:15:03.695849279Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:15:03.697808 env[1316]: 2025-10-31 01:15:03.613 [INFO][4660] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--86687d576--lcpfh-eth0 calico-apiserver-86687d576- calico-apiserver 7cb997cc-c908-4ddb-9523-a2aea9785811 1104 0 2025-10-31 01:14:31 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:86687d576 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-86687d576-lcpfh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibafd3824a53 [] [] }} ContainerID="91a23a9ddf489ee9f5518ce2e359344e57f1c12d445394ca1359fb5ccbe399e5" Namespace="calico-apiserver" Pod="calico-apiserver-86687d576-lcpfh" WorkloadEndpoint="localhost-k8s-calico--apiserver--86687d576--lcpfh-" Oct 31 01:15:03.697808 env[1316]: 2025-10-31 01:15:03.613 [INFO][4660] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="91a23a9ddf489ee9f5518ce2e359344e57f1c12d445394ca1359fb5ccbe399e5" Namespace="calico-apiserver" Pod="calico-apiserver-86687d576-lcpfh" WorkloadEndpoint="localhost-k8s-calico--apiserver--86687d576--lcpfh-eth0" Oct 31 01:15:03.697808 env[1316]: 2025-10-31 01:15:03.645 [INFO][4676] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="91a23a9ddf489ee9f5518ce2e359344e57f1c12d445394ca1359fb5ccbe399e5" 
HandleID="k8s-pod-network.91a23a9ddf489ee9f5518ce2e359344e57f1c12d445394ca1359fb5ccbe399e5" Workload="localhost-k8s-calico--apiserver--86687d576--lcpfh-eth0" Oct 31 01:15:03.697808 env[1316]: 2025-10-31 01:15:03.645 [INFO][4676] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="91a23a9ddf489ee9f5518ce2e359344e57f1c12d445394ca1359fb5ccbe399e5" HandleID="k8s-pod-network.91a23a9ddf489ee9f5518ce2e359344e57f1c12d445394ca1359fb5ccbe399e5" Workload="localhost-k8s-calico--apiserver--86687d576--lcpfh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f4a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-86687d576-lcpfh", "timestamp":"2025-10-31 01:15:03.645114105 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 01:15:03.697808 env[1316]: 2025-10-31 01:15:03.645 [INFO][4676] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:15:03.697808 env[1316]: 2025-10-31 01:15:03.645 [INFO][4676] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 01:15:03.697808 env[1316]: 2025-10-31 01:15:03.645 [INFO][4676] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 01:15:03.697808 env[1316]: 2025-10-31 01:15:03.650 [INFO][4676] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.91a23a9ddf489ee9f5518ce2e359344e57f1c12d445394ca1359fb5ccbe399e5" host="localhost" Oct 31 01:15:03.697808 env[1316]: 2025-10-31 01:15:03.654 [INFO][4676] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 01:15:03.697808 env[1316]: 2025-10-31 01:15:03.657 [INFO][4676] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 01:15:03.697808 env[1316]: 2025-10-31 01:15:03.659 [INFO][4676] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 01:15:03.697808 env[1316]: 2025-10-31 01:15:03.661 [INFO][4676] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 01:15:03.697808 env[1316]: 2025-10-31 01:15:03.661 [INFO][4676] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.91a23a9ddf489ee9f5518ce2e359344e57f1c12d445394ca1359fb5ccbe399e5" host="localhost" Oct 31 01:15:03.697808 env[1316]: 2025-10-31 01:15:03.662 [INFO][4676] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.91a23a9ddf489ee9f5518ce2e359344e57f1c12d445394ca1359fb5ccbe399e5 Oct 31 01:15:03.697808 env[1316]: 2025-10-31 01:15:03.668 [INFO][4676] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.91a23a9ddf489ee9f5518ce2e359344e57f1c12d445394ca1359fb5ccbe399e5" host="localhost" Oct 31 01:15:03.697808 env[1316]: 2025-10-31 01:15:03.675 [INFO][4676] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.91a23a9ddf489ee9f5518ce2e359344e57f1c12d445394ca1359fb5ccbe399e5" host="localhost" Oct 31 
01:15:03.697808 env[1316]: 2025-10-31 01:15:03.675 [INFO][4676] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.91a23a9ddf489ee9f5518ce2e359344e57f1c12d445394ca1359fb5ccbe399e5" host="localhost" Oct 31 01:15:03.697808 env[1316]: 2025-10-31 01:15:03.675 [INFO][4676] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:15:03.697808 env[1316]: 2025-10-31 01:15:03.675 [INFO][4676] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="91a23a9ddf489ee9f5518ce2e359344e57f1c12d445394ca1359fb5ccbe399e5" HandleID="k8s-pod-network.91a23a9ddf489ee9f5518ce2e359344e57f1c12d445394ca1359fb5ccbe399e5" Workload="localhost-k8s-calico--apiserver--86687d576--lcpfh-eth0" Oct 31 01:15:03.698377 env[1316]: 2025-10-31 01:15:03.680 [INFO][4660] cni-plugin/k8s.go 418: Populated endpoint ContainerID="91a23a9ddf489ee9f5518ce2e359344e57f1c12d445394ca1359fb5ccbe399e5" Namespace="calico-apiserver" Pod="calico-apiserver-86687d576-lcpfh" WorkloadEndpoint="localhost-k8s-calico--apiserver--86687d576--lcpfh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--86687d576--lcpfh-eth0", GenerateName:"calico-apiserver-86687d576-", Namespace:"calico-apiserver", SelfLink:"", UID:"7cb997cc-c908-4ddb-9523-a2aea9785811", ResourceVersion:"1104", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 14, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86687d576", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-86687d576-lcpfh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibafd3824a53", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:15:03.698377 env[1316]: 2025-10-31 01:15:03.680 [INFO][4660] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="91a23a9ddf489ee9f5518ce2e359344e57f1c12d445394ca1359fb5ccbe399e5" Namespace="calico-apiserver" Pod="calico-apiserver-86687d576-lcpfh" WorkloadEndpoint="localhost-k8s-calico--apiserver--86687d576--lcpfh-eth0" Oct 31 01:15:03.698377 env[1316]: 2025-10-31 01:15:03.681 [INFO][4660] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibafd3824a53 ContainerID="91a23a9ddf489ee9f5518ce2e359344e57f1c12d445394ca1359fb5ccbe399e5" Namespace="calico-apiserver" Pod="calico-apiserver-86687d576-lcpfh" WorkloadEndpoint="localhost-k8s-calico--apiserver--86687d576--lcpfh-eth0" Oct 31 01:15:03.698377 env[1316]: 2025-10-31 01:15:03.687 [INFO][4660] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="91a23a9ddf489ee9f5518ce2e359344e57f1c12d445394ca1359fb5ccbe399e5" Namespace="calico-apiserver" Pod="calico-apiserver-86687d576-lcpfh" WorkloadEndpoint="localhost-k8s-calico--apiserver--86687d576--lcpfh-eth0" Oct 31 01:15:03.698377 env[1316]: 2025-10-31 01:15:03.687 [INFO][4660] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="91a23a9ddf489ee9f5518ce2e359344e57f1c12d445394ca1359fb5ccbe399e5" Namespace="calico-apiserver" Pod="calico-apiserver-86687d576-lcpfh" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--86687d576--lcpfh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--86687d576--lcpfh-eth0", GenerateName:"calico-apiserver-86687d576-", Namespace:"calico-apiserver", SelfLink:"", UID:"7cb997cc-c908-4ddb-9523-a2aea9785811", ResourceVersion:"1104", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 14, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86687d576", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"91a23a9ddf489ee9f5518ce2e359344e57f1c12d445394ca1359fb5ccbe399e5", Pod:"calico-apiserver-86687d576-lcpfh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibafd3824a53", MAC:"b6:ad:7a:67:b8:2d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:15:03.698377 env[1316]: 2025-10-31 01:15:03.695 [INFO][4660] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="91a23a9ddf489ee9f5518ce2e359344e57f1c12d445394ca1359fb5ccbe399e5" Namespace="calico-apiserver" Pod="calico-apiserver-86687d576-lcpfh" WorkloadEndpoint="localhost-k8s-calico--apiserver--86687d576--lcpfh-eth0" Oct 
31 01:15:03.700736 env[1316]: time="2025-10-31T01:15:03.700577787Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 31 01:15:03.700954 kubelet[2122]: E1031 01:15:03.700916 2122 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 01:15:03.701010 kubelet[2122]: E1031 01:15:03.700969 2122 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 01:15:03.701150 kubelet[2122]: E1031 01:15:03.701105 2122 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n7fvl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-fd8js_calico-system(bd0bddee-8a85-4f55-a28b-a795608cb1fb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 31 01:15:03.703519 env[1316]: time="2025-10-31T01:15:03.703476707Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 31 01:15:03.709647 env[1316]: time="2025-10-31T01:15:03.709139363Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 01:15:03.709647 env[1316]: time="2025-10-31T01:15:03.709200560Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 01:15:03.709647 env[1316]: time="2025-10-31T01:15:03.709210819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:15:03.709647 env[1316]: time="2025-10-31T01:15:03.709419565Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/91a23a9ddf489ee9f5518ce2e359344e57f1c12d445394ca1359fb5ccbe399e5 pid=4698 runtime=io.containerd.runc.v2 Oct 31 01:15:03.710000 audit[4703]: NETFILTER_CFG table=filter:117 family=2 entries=53 op=nft_register_chain pid=4703 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 31 01:15:03.710000 audit[4703]: SYSCALL arch=c000003e syscall=46 success=yes exit=26608 a0=3 a1=7ffedb1d7530 a2=0 a3=7ffedb1d751c items=0 ppid=4034 pid=4703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:03.710000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 31 01:15:03.741759 systemd-resolved[1231]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address 
Oct 31 01:15:03.750890 kubelet[2122]: E1031 01:15:03.750605 2122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85445fc7bc-269qr" podUID="cbcd2bd9-2395-4730-b047-aac75539fb47" Oct 31 01:15:03.756410 kubelet[2122]: E1031 01:15:03.756379 2122 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:15:03.756725 kubelet[2122]: E1031 01:15:03.756623 2122 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:15:03.761392 kubelet[2122]: E1031 01:15:03.759800 2122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wj6mp" podUID="50cdc712-db7a-41da-8129-57ca3765d884" Oct 31 01:15:03.778835 env[1316]: time="2025-10-31T01:15:03.778787152Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-86687d576-lcpfh,Uid:7cb997cc-c908-4ddb-9523-a2aea9785811,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"91a23a9ddf489ee9f5518ce2e359344e57f1c12d445394ca1359fb5ccbe399e5\"" Oct 31 01:15:03.793000 audit[4734]: NETFILTER_CFG table=filter:118 family=2 entries=14 op=nft_register_rule pid=4734 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:15:03.793000 audit[4734]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd304f9830 a2=0 a3=7ffd304f981c items=0 ppid=2275 pid=4734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:03.793000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:15:03.804000 audit[4734]: NETFILTER_CFG table=nat:119 family=2 entries=56 op=nft_register_chain pid=4734 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:15:03.804000 audit[4734]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffd304f9830 a2=0 a3=7ffd304f981c items=0 ppid=2275 pid=4734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:03.804000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:15:04.072493 env[1316]: time="2025-10-31T01:15:04.072345576Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:15:04.136418 env[1316]: time="2025-10-31T01:15:04.136334998Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 31 01:15:04.136675 kubelet[2122]: E1031 01:15:04.136631 2122 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 01:15:04.136734 kubelet[2122]: E1031 01:15:04.136692 2122 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 01:15:04.136976 kubelet[2122]: E1031 01:15:04.136919 2122 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n7fvl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-fd8js_calico-system(bd0bddee-8a85-4f55-a28b-a795608cb1fb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 31 01:15:04.137127 env[1316]: time="2025-10-31T01:15:04.137010236Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 01:15:04.139201 kubelet[2122]: E1031 01:15:04.139160 2122 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fd8js" podUID="bd0bddee-8a85-4f55-a28b-a795608cb1fb" Oct 31 01:15:04.161779 systemd-networkd[1085]: calie61c2935cb0: Gained IPv6LL Oct 31 01:15:04.225823 systemd-networkd[1085]: calia8bea715de0: Gained IPv6LL Oct 31 01:15:04.540690 env[1316]: time="2025-10-31T01:15:04.540592260Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:15:04.612917 env[1316]: time="2025-10-31T01:15:04.612844631Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 01:15:04.613164 kubelet[2122]: E1031 01:15:04.613114 2122 log.go:32] "PullImage from image service failed" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:15:04.613216 kubelet[2122]: E1031 01:15:04.613175 2122 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:15:04.613380 kubelet[2122]: E1031 01:15:04.613327 2122 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gxwxj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-86687d576-lcpfh_calico-apiserver(7cb997cc-c908-4ddb-9523-a2aea9785811): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 01:15:04.615237 kubelet[2122]: E1031 01:15:04.615202 2122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86687d576-lcpfh" podUID="7cb997cc-c908-4ddb-9523-a2aea9785811" Oct 31 01:15:04.759921 kubelet[2122]: E1031 01:15:04.759880 2122 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:15:04.760577 kubelet[2122]: E1031 01:15:04.760542 2122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85445fc7bc-269qr" podUID="cbcd2bd9-2395-4730-b047-aac75539fb47" Oct 31 01:15:04.760876 kubelet[2122]: E1031 01:15:04.760838 2122 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fd8js" podUID="bd0bddee-8a85-4f55-a28b-a795608cb1fb" Oct 31 01:15:04.761015 kubelet[2122]: E1031 01:15:04.760985 2122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86687d576-lcpfh" podUID="7cb997cc-c908-4ddb-9523-a2aea9785811" Oct 31 01:15:04.993795 systemd-networkd[1085]: calibafd3824a53: Gained IPv6LL Oct 31 01:15:05.036000 audit[4737]: NETFILTER_CFG table=filter:120 family=2 entries=14 op=nft_register_rule pid=4737 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:15:05.036000 audit[4737]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffea0d08a00 a2=0 a3=7ffea0d089ec items=0 ppid=2275 pid=4737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:05.036000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:15:05.046000 audit[4737]: NETFILTER_CFG table=nat:121 family=2 entries=20 op=nft_register_rule pid=4737 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:15:05.046000 audit[4737]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffea0d08a00 a2=0 a3=7ffea0d089ec items=0 ppid=2275 pid=4737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:05.046000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:15:05.762508 kubelet[2122]: E1031 01:15:05.762429 2122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86687d576-lcpfh" podUID="7cb997cc-c908-4ddb-9523-a2aea9785811" Oct 31 01:15:07.643759 systemd[1]: Started sshd@10-10.0.0.95:22-10.0.0.1:43430.service. Oct 31 01:15:07.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.95:22-10.0.0.1:43430 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:15:07.645473 kernel: kauditd_printk_skb: 28 callbacks suppressed Oct 31 01:15:07.645537 kernel: audit: type=1130 audit(1761873307.642:446): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.95:22-10.0.0.1:43430 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:15:07.676000 audit[4744]: USER_ACCT pid=4744 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:07.678927 sshd[4744]: Accepted publickey for core from 10.0.0.1 port 43430 ssh2: RSA SHA256:BzWaVf4M0LrLtWllQvHpK+M/9x+T9duV7gwz9J5cQAA Oct 31 01:15:07.680827 sshd[4744]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 01:15:07.678000 audit[4744]: CRED_ACQ pid=4744 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:07.685870 systemd-logind[1300]: New session 11 of user core. Oct 31 01:15:07.685982 systemd[1]: Started session-11.scope. Oct 31 01:15:07.693067 kernel: audit: type=1101 audit(1761873307.676:447): pid=4744 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:07.693138 kernel: audit: type=1103 audit(1761873307.678:448): pid=4744 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:07.697617 kernel: audit: type=1006 audit(1761873307.678:449): pid=4744 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Oct 31 01:15:07.678000 audit[4744]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe352e6f30 a2=3 a3=0 items=0 ppid=1 pid=4744 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:07.678000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 01:15:07.706806 kernel: audit: type=1300 audit(1761873307.678:449): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe352e6f30 a2=3 a3=0 items=0 ppid=1 pid=4744 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:07.706889 kernel: audit: type=1327 audit(1761873307.678:449): proctitle=737368643A20636F7265205B707269765D Oct 31 01:15:07.706906 kernel: audit: type=1105 audit(1761873307.691:450): pid=4744 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:07.691000 audit[4744]: USER_START pid=4744 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:07.693000 audit[4747]: CRED_ACQ pid=4747 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:07.719695 kernel: audit: type=1103 audit(1761873307.693:451): pid=4747 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:07.884033 sshd[4744]: pam_unix(sshd:session): session closed for user core Oct 31 01:15:07.883000 
audit[4744]: USER_END pid=4744 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:07.887044 systemd[1]: sshd@10-10.0.0.95:22-10.0.0.1:43430.service: Deactivated successfully. Oct 31 01:15:07.888083 systemd[1]: session-11.scope: Deactivated successfully. Oct 31 01:15:07.893510 systemd-logind[1300]: Session 11 logged out. Waiting for processes to exit. Oct 31 01:15:07.883000 audit[4744]: CRED_DISP pid=4744 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:07.894335 systemd-logind[1300]: Removed session 11. Oct 31 01:15:07.900627 kernel: audit: type=1106 audit(1761873307.883:452): pid=4744 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:07.900688 kernel: audit: type=1104 audit(1761873307.883:453): pid=4744 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:07.883000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.95:22-10.0.0.1:43430 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:15:12.462704 env[1316]: time="2025-10-31T01:15:12.462580497Z" level=info msg="StopPodSandbox for \"a8a7d15da3bf8cbdb191a985928692fcb1e7fac6e2665ad7368d328a401536ae\"" Oct 31 01:15:12.494688 env[1316]: time="2025-10-31T01:15:12.494581567Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 31 01:15:12.821716 env[1316]: 2025-10-31 01:15:12.491 [WARNING][4777] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a8a7d15da3bf8cbdb191a985928692fcb1e7fac6e2665ad7368d328a401536ae" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--wj6mp-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"50cdc712-db7a-41da-8129-57ca3765d884", ResourceVersion:"1115", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 14, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"82d435bae1d32287abf190b12691d538e621edf02a8c10c4acb7fa3db9fb8608", Pod:"goldmane-666569f655-wj6mp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1fb00410b54", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:15:12.821716 env[1316]: 2025-10-31 01:15:12.491 [INFO][4777] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a8a7d15da3bf8cbdb191a985928692fcb1e7fac6e2665ad7368d328a401536ae" Oct 31 01:15:12.821716 env[1316]: 2025-10-31 01:15:12.491 [INFO][4777] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a8a7d15da3bf8cbdb191a985928692fcb1e7fac6e2665ad7368d328a401536ae" iface="eth0" netns="" Oct 31 01:15:12.821716 env[1316]: 2025-10-31 01:15:12.491 [INFO][4777] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a8a7d15da3bf8cbdb191a985928692fcb1e7fac6e2665ad7368d328a401536ae" Oct 31 01:15:12.821716 env[1316]: 2025-10-31 01:15:12.491 [INFO][4777] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a8a7d15da3bf8cbdb191a985928692fcb1e7fac6e2665ad7368d328a401536ae" Oct 31 01:15:12.821716 env[1316]: 2025-10-31 01:15:12.516 [INFO][4788] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a8a7d15da3bf8cbdb191a985928692fcb1e7fac6e2665ad7368d328a401536ae" HandleID="k8s-pod-network.a8a7d15da3bf8cbdb191a985928692fcb1e7fac6e2665ad7368d328a401536ae" Workload="localhost-k8s-goldmane--666569f655--wj6mp-eth0" Oct 31 01:15:12.821716 env[1316]: 2025-10-31 01:15:12.516 [INFO][4788] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:15:12.821716 env[1316]: 2025-10-31 01:15:12.516 [INFO][4788] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:15:12.821716 env[1316]: 2025-10-31 01:15:12.664 [WARNING][4788] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a8a7d15da3bf8cbdb191a985928692fcb1e7fac6e2665ad7368d328a401536ae" HandleID="k8s-pod-network.a8a7d15da3bf8cbdb191a985928692fcb1e7fac6e2665ad7368d328a401536ae" Workload="localhost-k8s-goldmane--666569f655--wj6mp-eth0" Oct 31 01:15:12.821716 env[1316]: 2025-10-31 01:15:12.664 [INFO][4788] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a8a7d15da3bf8cbdb191a985928692fcb1e7fac6e2665ad7368d328a401536ae" HandleID="k8s-pod-network.a8a7d15da3bf8cbdb191a985928692fcb1e7fac6e2665ad7368d328a401536ae" Workload="localhost-k8s-goldmane--666569f655--wj6mp-eth0" Oct 31 01:15:12.821716 env[1316]: 2025-10-31 01:15:12.817 [INFO][4788] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:15:12.821716 env[1316]: 2025-10-31 01:15:12.820 [INFO][4777] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a8a7d15da3bf8cbdb191a985928692fcb1e7fac6e2665ad7368d328a401536ae" Oct 31 01:15:12.822382 env[1316]: time="2025-10-31T01:15:12.822315860Z" level=info msg="TearDown network for sandbox \"a8a7d15da3bf8cbdb191a985928692fcb1e7fac6e2665ad7368d328a401536ae\" successfully" Oct 31 01:15:12.822382 env[1316]: time="2025-10-31T01:15:12.822356226Z" level=info msg="StopPodSandbox for \"a8a7d15da3bf8cbdb191a985928692fcb1e7fac6e2665ad7368d328a401536ae\" returns successfully" Oct 31 01:15:12.822983 env[1316]: time="2025-10-31T01:15:12.822958345Z" level=info msg="RemovePodSandbox for \"a8a7d15da3bf8cbdb191a985928692fcb1e7fac6e2665ad7368d328a401536ae\"" Oct 31 01:15:12.823063 env[1316]: time="2025-10-31T01:15:12.822991518Z" level=info msg="Forcibly stopping sandbox \"a8a7d15da3bf8cbdb191a985928692fcb1e7fac6e2665ad7368d328a401536ae\"" Oct 31 01:15:12.887878 systemd[1]: Started sshd@11-10.0.0.95:22-10.0.0.1:47344.service. 
Oct 31 01:15:12.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.95:22-10.0.0.1:47344 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:15:12.889820 kernel: kauditd_printk_skb: 1 callbacks suppressed Oct 31 01:15:12.889906 kernel: audit: type=1130 audit(1761873312.886:455): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.95:22-10.0.0.1:47344 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:15:12.919000 audit[4820]: USER_ACCT pid=4820 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:12.921370 sshd[4820]: Accepted publickey for core from 10.0.0.1 port 47344 ssh2: RSA SHA256:BzWaVf4M0LrLtWllQvHpK+M/9x+T9duV7gwz9J5cQAA Oct 31 01:15:12.926000 audit[4820]: CRED_ACQ pid=4820 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:12.928111 sshd[4820]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 01:15:12.932498 systemd-logind[1300]: New session 12 of user core. Oct 31 01:15:12.933151 systemd[1]: Started session-12.scope. 
Oct 31 01:15:12.933893 kernel: audit: type=1101 audit(1761873312.919:456): pid=4820 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:12.934060 kernel: audit: type=1103 audit(1761873312.926:457): pid=4820 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:12.939683 kernel: audit: type=1006 audit(1761873312.926:458): pid=4820 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Oct 31 01:15:12.939791 kernel: audit: type=1300 audit(1761873312.926:458): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd8d013170 a2=3 a3=0 items=0 ppid=1 pid=4820 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:12.926000 audit[4820]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd8d013170 a2=3 a3=0 items=0 ppid=1 pid=4820 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:12.926000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 01:15:12.947024 kernel: audit: type=1327 audit(1761873312.926:458): proctitle=737368643A20636F7265205B707269765D Oct 31 01:15:12.947087 kernel: audit: type=1105 audit(1761873312.936:459): pid=4820 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 
addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:12.936000 audit[4820]: USER_START pid=4820 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:12.937000 audit[4824]: CRED_ACQ pid=4824 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:12.959958 kernel: audit: type=1103 audit(1761873312.937:460): pid=4824 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:13.001052 env[1316]: 2025-10-31 01:15:12.868 [WARNING][4806] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a8a7d15da3bf8cbdb191a985928692fcb1e7fac6e2665ad7368d328a401536ae" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--wj6mp-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"50cdc712-db7a-41da-8129-57ca3765d884", ResourceVersion:"1115", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 14, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"82d435bae1d32287abf190b12691d538e621edf02a8c10c4acb7fa3db9fb8608", Pod:"goldmane-666569f655-wj6mp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1fb00410b54", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:15:13.001052 env[1316]: 2025-10-31 01:15:12.869 [INFO][4806] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a8a7d15da3bf8cbdb191a985928692fcb1e7fac6e2665ad7368d328a401536ae" Oct 31 01:15:13.001052 env[1316]: 2025-10-31 01:15:12.869 [INFO][4806] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a8a7d15da3bf8cbdb191a985928692fcb1e7fac6e2665ad7368d328a401536ae" iface="eth0" netns="" Oct 31 01:15:13.001052 env[1316]: 2025-10-31 01:15:12.869 [INFO][4806] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a8a7d15da3bf8cbdb191a985928692fcb1e7fac6e2665ad7368d328a401536ae" Oct 31 01:15:13.001052 env[1316]: 2025-10-31 01:15:12.869 [INFO][4806] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a8a7d15da3bf8cbdb191a985928692fcb1e7fac6e2665ad7368d328a401536ae" Oct 31 01:15:13.001052 env[1316]: 2025-10-31 01:15:12.899 [INFO][4814] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a8a7d15da3bf8cbdb191a985928692fcb1e7fac6e2665ad7368d328a401536ae" HandleID="k8s-pod-network.a8a7d15da3bf8cbdb191a985928692fcb1e7fac6e2665ad7368d328a401536ae" Workload="localhost-k8s-goldmane--666569f655--wj6mp-eth0" Oct 31 01:15:13.001052 env[1316]: 2025-10-31 01:15:12.899 [INFO][4814] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:15:13.001052 env[1316]: 2025-10-31 01:15:12.899 [INFO][4814] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:15:13.001052 env[1316]: 2025-10-31 01:15:12.996 [WARNING][4814] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a8a7d15da3bf8cbdb191a985928692fcb1e7fac6e2665ad7368d328a401536ae" HandleID="k8s-pod-network.a8a7d15da3bf8cbdb191a985928692fcb1e7fac6e2665ad7368d328a401536ae" Workload="localhost-k8s-goldmane--666569f655--wj6mp-eth0" Oct 31 01:15:13.001052 env[1316]: 2025-10-31 01:15:12.996 [INFO][4814] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a8a7d15da3bf8cbdb191a985928692fcb1e7fac6e2665ad7368d328a401536ae" HandleID="k8s-pod-network.a8a7d15da3bf8cbdb191a985928692fcb1e7fac6e2665ad7368d328a401536ae" Workload="localhost-k8s-goldmane--666569f655--wj6mp-eth0" Oct 31 01:15:13.001052 env[1316]: 2025-10-31 01:15:12.998 [INFO][4814] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 01:15:13.001052 env[1316]: 2025-10-31 01:15:12.999 [INFO][4806] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a8a7d15da3bf8cbdb191a985928692fcb1e7fac6e2665ad7368d328a401536ae" Oct 31 01:15:13.001588 env[1316]: time="2025-10-31T01:15:13.001081806Z" level=info msg="TearDown network for sandbox \"a8a7d15da3bf8cbdb191a985928692fcb1e7fac6e2665ad7368d328a401536ae\" successfully" Oct 31 01:15:13.010908 env[1316]: time="2025-10-31T01:15:13.010858233Z" level=info msg="RemovePodSandbox \"a8a7d15da3bf8cbdb191a985928692fcb1e7fac6e2665ad7368d328a401536ae\" returns successfully" Oct 31 01:15:13.011629 env[1316]: time="2025-10-31T01:15:13.011570581Z" level=info msg="StopPodSandbox for \"88cb726ffde3e673346a9633d2a436490c0a0455c8d7b6d7ee7cfdaab2d6004b\"" Oct 31 01:15:13.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.95:22-10.0.0.1:47350 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:15:13.075788 systemd[1]: Started sshd@12-10.0.0.95:22-10.0.0.1:47350.service. Oct 31 01:15:13.075973 sshd[4820]: pam_unix(sshd:session): session closed for user core Oct 31 01:15:13.078273 systemd[1]: sshd@11-10.0.0.95:22-10.0.0.1:47344.service: Deactivated successfully. Oct 31 01:15:13.079242 systemd[1]: session-12.scope: Deactivated successfully. Oct 31 01:15:13.080724 systemd-logind[1300]: Session 12 logged out. Waiting for processes to exit. Oct 31 01:15:13.081576 systemd-logind[1300]: Removed session 12. Oct 31 01:15:13.082687 env[1316]: 2025-10-31 01:15:13.040 [WARNING][4845] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="88cb726ffde3e673346a9633d2a436490c0a0455c8d7b6d7ee7cfdaab2d6004b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--86687d576--lcpfh-eth0", GenerateName:"calico-apiserver-86687d576-", Namespace:"calico-apiserver", SelfLink:"", UID:"7cb997cc-c908-4ddb-9523-a2aea9785811", ResourceVersion:"1141", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 14, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86687d576", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"91a23a9ddf489ee9f5518ce2e359344e57f1c12d445394ca1359fb5ccbe399e5", Pod:"calico-apiserver-86687d576-lcpfh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibafd3824a53", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:15:13.082687 env[1316]: 2025-10-31 01:15:13.040 [INFO][4845] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="88cb726ffde3e673346a9633d2a436490c0a0455c8d7b6d7ee7cfdaab2d6004b" Oct 31 01:15:13.082687 env[1316]: 2025-10-31 01:15:13.040 [INFO][4845] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="88cb726ffde3e673346a9633d2a436490c0a0455c8d7b6d7ee7cfdaab2d6004b" iface="eth0" netns="" Oct 31 01:15:13.082687 env[1316]: 2025-10-31 01:15:13.040 [INFO][4845] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="88cb726ffde3e673346a9633d2a436490c0a0455c8d7b6d7ee7cfdaab2d6004b" Oct 31 01:15:13.082687 env[1316]: 2025-10-31 01:15:13.040 [INFO][4845] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="88cb726ffde3e673346a9633d2a436490c0a0455c8d7b6d7ee7cfdaab2d6004b" Oct 31 01:15:13.082687 env[1316]: 2025-10-31 01:15:13.061 [INFO][4854] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="88cb726ffde3e673346a9633d2a436490c0a0455c8d7b6d7ee7cfdaab2d6004b" HandleID="k8s-pod-network.88cb726ffde3e673346a9633d2a436490c0a0455c8d7b6d7ee7cfdaab2d6004b" Workload="localhost-k8s-calico--apiserver--86687d576--lcpfh-eth0" Oct 31 01:15:13.082687 env[1316]: 2025-10-31 01:15:13.061 [INFO][4854] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:15:13.082687 env[1316]: 2025-10-31 01:15:13.061 [INFO][4854] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:15:13.082687 env[1316]: 2025-10-31 01:15:13.070 [WARNING][4854] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="88cb726ffde3e673346a9633d2a436490c0a0455c8d7b6d7ee7cfdaab2d6004b" HandleID="k8s-pod-network.88cb726ffde3e673346a9633d2a436490c0a0455c8d7b6d7ee7cfdaab2d6004b" Workload="localhost-k8s-calico--apiserver--86687d576--lcpfh-eth0" Oct 31 01:15:13.082687 env[1316]: 2025-10-31 01:15:13.070 [INFO][4854] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="88cb726ffde3e673346a9633d2a436490c0a0455c8d7b6d7ee7cfdaab2d6004b" HandleID="k8s-pod-network.88cb726ffde3e673346a9633d2a436490c0a0455c8d7b6d7ee7cfdaab2d6004b" Workload="localhost-k8s-calico--apiserver--86687d576--lcpfh-eth0" Oct 31 01:15:13.082687 env[1316]: 2025-10-31 01:15:13.071 [INFO][4854] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:15:13.082687 env[1316]: 2025-10-31 01:15:13.075 [INFO][4845] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="88cb726ffde3e673346a9633d2a436490c0a0455c8d7b6d7ee7cfdaab2d6004b" Oct 31 01:15:13.075000 audit[4820]: USER_END pid=4820 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:13.083202 env[1316]: time="2025-10-31T01:15:13.082738938Z" level=info msg="TearDown network for sandbox \"88cb726ffde3e673346a9633d2a436490c0a0455c8d7b6d7ee7cfdaab2d6004b\" successfully" Oct 31 01:15:13.083202 env[1316]: time="2025-10-31T01:15:13.082768595Z" level=info msg="StopPodSandbox for \"88cb726ffde3e673346a9633d2a436490c0a0455c8d7b6d7ee7cfdaab2d6004b\" returns successfully" Oct 31 01:15:13.083272 env[1316]: time="2025-10-31T01:15:13.083233694Z" level=info msg="RemovePodSandbox for \"88cb726ffde3e673346a9633d2a436490c0a0455c8d7b6d7ee7cfdaab2d6004b\"" Oct 31 01:15:13.083298 env[1316]: time="2025-10-31T01:15:13.083257740Z" level=info msg="Forcibly stopping sandbox 
\"88cb726ffde3e673346a9633d2a436490c0a0455c8d7b6d7ee7cfdaab2d6004b\"" Oct 31 01:15:13.090684 kernel: audit: type=1130 audit(1761873313.074:461): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.95:22-10.0.0.1:47350 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:15:13.090791 kernel: audit: type=1106 audit(1761873313.075:462): pid=4820 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:13.075000 audit[4820]: CRED_DISP pid=4820 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:13.076000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.95:22-10.0.0.1:47344 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:15:13.112053 env[1316]: time="2025-10-31T01:15:13.111997514Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:15:13.113479 env[1316]: time="2025-10-31T01:15:13.113415727Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 31 01:15:13.113725 kubelet[2122]: E1031 01:15:13.113667 2122 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 01:15:13.112000 audit[4862]: USER_ACCT pid=4862 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:13.114219 kubelet[2122]: E1031 01:15:13.113744 2122 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 01:15:13.114219 kubelet[2122]: E1031 01:15:13.114065 2122 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:8c280e58b5284c02a79bc96b4b32937d,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7gq2m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-86b4655b9-f4c4n_calico-system(9f314ab5-dad4-417f-bff7-f3843175cd3e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 31 01:15:13.114340 sshd[4862]: Accepted publickey for core from 10.0.0.1 port 47350 ssh2: RSA SHA256:BzWaVf4M0LrLtWllQvHpK+M/9x+T9duV7gwz9J5cQAA Oct 31 01:15:13.113000 
audit[4862]: CRED_ACQ pid=4862 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:13.113000 audit[4862]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd772f36f0 a2=3 a3=0 items=0 ppid=1 pid=4862 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:13.113000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 01:15:13.115420 sshd[4862]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 01:15:13.117411 env[1316]: time="2025-10-31T01:15:13.117381210Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 31 01:15:13.121584 systemd[1]: Started session-13.scope. Oct 31 01:15:13.122082 systemd-logind[1300]: New session 13 of user core. Oct 31 01:15:13.126000 audit[4862]: USER_START pid=4862 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:13.127000 audit[4887]: CRED_ACQ pid=4887 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:13.156126 env[1316]: 2025-10-31 01:15:13.125 [WARNING][4876] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="88cb726ffde3e673346a9633d2a436490c0a0455c8d7b6d7ee7cfdaab2d6004b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--86687d576--lcpfh-eth0", GenerateName:"calico-apiserver-86687d576-", Namespace:"calico-apiserver", SelfLink:"", UID:"7cb997cc-c908-4ddb-9523-a2aea9785811", ResourceVersion:"1141", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 14, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86687d576", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"91a23a9ddf489ee9f5518ce2e359344e57f1c12d445394ca1359fb5ccbe399e5", Pod:"calico-apiserver-86687d576-lcpfh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibafd3824a53", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:15:13.156126 env[1316]: 2025-10-31 01:15:13.126 [INFO][4876] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="88cb726ffde3e673346a9633d2a436490c0a0455c8d7b6d7ee7cfdaab2d6004b" Oct 31 01:15:13.156126 env[1316]: 2025-10-31 01:15:13.126 [INFO][4876] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="88cb726ffde3e673346a9633d2a436490c0a0455c8d7b6d7ee7cfdaab2d6004b" iface="eth0" netns="" Oct 31 01:15:13.156126 env[1316]: 2025-10-31 01:15:13.126 [INFO][4876] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="88cb726ffde3e673346a9633d2a436490c0a0455c8d7b6d7ee7cfdaab2d6004b" Oct 31 01:15:13.156126 env[1316]: 2025-10-31 01:15:13.126 [INFO][4876] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="88cb726ffde3e673346a9633d2a436490c0a0455c8d7b6d7ee7cfdaab2d6004b" Oct 31 01:15:13.156126 env[1316]: 2025-10-31 01:15:13.144 [INFO][4886] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="88cb726ffde3e673346a9633d2a436490c0a0455c8d7b6d7ee7cfdaab2d6004b" HandleID="k8s-pod-network.88cb726ffde3e673346a9633d2a436490c0a0455c8d7b6d7ee7cfdaab2d6004b" Workload="localhost-k8s-calico--apiserver--86687d576--lcpfh-eth0" Oct 31 01:15:13.156126 env[1316]: 2025-10-31 01:15:13.145 [INFO][4886] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:15:13.156126 env[1316]: 2025-10-31 01:15:13.145 [INFO][4886] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:15:13.156126 env[1316]: 2025-10-31 01:15:13.150 [WARNING][4886] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="88cb726ffde3e673346a9633d2a436490c0a0455c8d7b6d7ee7cfdaab2d6004b" HandleID="k8s-pod-network.88cb726ffde3e673346a9633d2a436490c0a0455c8d7b6d7ee7cfdaab2d6004b" Workload="localhost-k8s-calico--apiserver--86687d576--lcpfh-eth0" Oct 31 01:15:13.156126 env[1316]: 2025-10-31 01:15:13.151 [INFO][4886] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="88cb726ffde3e673346a9633d2a436490c0a0455c8d7b6d7ee7cfdaab2d6004b" HandleID="k8s-pod-network.88cb726ffde3e673346a9633d2a436490c0a0455c8d7b6d7ee7cfdaab2d6004b" Workload="localhost-k8s-calico--apiserver--86687d576--lcpfh-eth0" Oct 31 01:15:13.156126 env[1316]: 2025-10-31 01:15:13.152 [INFO][4886] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:15:13.156126 env[1316]: 2025-10-31 01:15:13.154 [INFO][4876] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="88cb726ffde3e673346a9633d2a436490c0a0455c8d7b6d7ee7cfdaab2d6004b" Oct 31 01:15:13.156740 env[1316]: time="2025-10-31T01:15:13.156163093Z" level=info msg="TearDown network for sandbox \"88cb726ffde3e673346a9633d2a436490c0a0455c8d7b6d7ee7cfdaab2d6004b\" successfully" Oct 31 01:15:13.159821 env[1316]: time="2025-10-31T01:15:13.159793722Z" level=info msg="RemovePodSandbox \"88cb726ffde3e673346a9633d2a436490c0a0455c8d7b6d7ee7cfdaab2d6004b\" returns successfully" Oct 31 01:15:13.160427 env[1316]: time="2025-10-31T01:15:13.160390591Z" level=info msg="StopPodSandbox for \"679a0f53791fd69c79c416e05f0d217c02eaabfee988b0580619de6c328519d7\"" Oct 31 01:15:13.272421 env[1316]: 2025-10-31 01:15:13.190 [WARNING][4904] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="679a0f53791fd69c79c416e05f0d217c02eaabfee988b0580619de6c328519d7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--85445fc7bc--269qr-eth0", GenerateName:"calico-kube-controllers-85445fc7bc-", Namespace:"calico-system", SelfLink:"", UID:"cbcd2bd9-2395-4730-b047-aac75539fb47", ResourceVersion:"1138", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 14, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85445fc7bc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a18e7b73eb6924aa5d93ac593b397d1f58c3058ee064afb891d9a9bb63044ce1", Pod:"calico-kube-controllers-85445fc7bc-269qr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia8bea715de0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:15:13.272421 env[1316]: 2025-10-31 01:15:13.190 [INFO][4904] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="679a0f53791fd69c79c416e05f0d217c02eaabfee988b0580619de6c328519d7" Oct 31 01:15:13.272421 env[1316]: 2025-10-31 01:15:13.190 [INFO][4904] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="679a0f53791fd69c79c416e05f0d217c02eaabfee988b0580619de6c328519d7" iface="eth0" netns="" Oct 31 01:15:13.272421 env[1316]: 2025-10-31 01:15:13.190 [INFO][4904] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="679a0f53791fd69c79c416e05f0d217c02eaabfee988b0580619de6c328519d7" Oct 31 01:15:13.272421 env[1316]: 2025-10-31 01:15:13.190 [INFO][4904] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="679a0f53791fd69c79c416e05f0d217c02eaabfee988b0580619de6c328519d7" Oct 31 01:15:13.272421 env[1316]: 2025-10-31 01:15:13.207 [INFO][4919] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="679a0f53791fd69c79c416e05f0d217c02eaabfee988b0580619de6c328519d7" HandleID="k8s-pod-network.679a0f53791fd69c79c416e05f0d217c02eaabfee988b0580619de6c328519d7" Workload="localhost-k8s-calico--kube--controllers--85445fc7bc--269qr-eth0" Oct 31 01:15:13.272421 env[1316]: 2025-10-31 01:15:13.208 [INFO][4919] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:15:13.272421 env[1316]: 2025-10-31 01:15:13.208 [INFO][4919] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:15:13.272421 env[1316]: 2025-10-31 01:15:13.266 [WARNING][4919] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="679a0f53791fd69c79c416e05f0d217c02eaabfee988b0580619de6c328519d7" HandleID="k8s-pod-network.679a0f53791fd69c79c416e05f0d217c02eaabfee988b0580619de6c328519d7" Workload="localhost-k8s-calico--kube--controllers--85445fc7bc--269qr-eth0" Oct 31 01:15:13.272421 env[1316]: 2025-10-31 01:15:13.266 [INFO][4919] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="679a0f53791fd69c79c416e05f0d217c02eaabfee988b0580619de6c328519d7" HandleID="k8s-pod-network.679a0f53791fd69c79c416e05f0d217c02eaabfee988b0580619de6c328519d7" Workload="localhost-k8s-calico--kube--controllers--85445fc7bc--269qr-eth0" Oct 31 01:15:13.272421 env[1316]: 2025-10-31 01:15:13.268 [INFO][4919] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:15:13.272421 env[1316]: 2025-10-31 01:15:13.270 [INFO][4904] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="679a0f53791fd69c79c416e05f0d217c02eaabfee988b0580619de6c328519d7" Oct 31 01:15:13.273059 env[1316]: time="2025-10-31T01:15:13.272427199Z" level=info msg="TearDown network for sandbox \"679a0f53791fd69c79c416e05f0d217c02eaabfee988b0580619de6c328519d7\" successfully" Oct 31 01:15:13.273059 env[1316]: time="2025-10-31T01:15:13.272459631Z" level=info msg="StopPodSandbox for \"679a0f53791fd69c79c416e05f0d217c02eaabfee988b0580619de6c328519d7\" returns successfully" Oct 31 01:15:13.273059 env[1316]: time="2025-10-31T01:15:13.272881930Z" level=info msg="RemovePodSandbox for \"679a0f53791fd69c79c416e05f0d217c02eaabfee988b0580619de6c328519d7\"" Oct 31 01:15:13.273059 env[1316]: time="2025-10-31T01:15:13.272907608Z" level=info msg="Forcibly stopping sandbox \"679a0f53791fd69c79c416e05f0d217c02eaabfee988b0580619de6c328519d7\"" Oct 31 01:15:13.347232 env[1316]: 2025-10-31 01:15:13.309 [WARNING][4938] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="679a0f53791fd69c79c416e05f0d217c02eaabfee988b0580619de6c328519d7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--85445fc7bc--269qr-eth0", GenerateName:"calico-kube-controllers-85445fc7bc-", Namespace:"calico-system", SelfLink:"", UID:"cbcd2bd9-2395-4730-b047-aac75539fb47", ResourceVersion:"1138", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 14, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85445fc7bc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a18e7b73eb6924aa5d93ac593b397d1f58c3058ee064afb891d9a9bb63044ce1", Pod:"calico-kube-controllers-85445fc7bc-269qr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia8bea715de0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:15:13.347232 env[1316]: 2025-10-31 01:15:13.309 [INFO][4938] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="679a0f53791fd69c79c416e05f0d217c02eaabfee988b0580619de6c328519d7" Oct 31 01:15:13.347232 env[1316]: 2025-10-31 01:15:13.309 [INFO][4938] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="679a0f53791fd69c79c416e05f0d217c02eaabfee988b0580619de6c328519d7" iface="eth0" netns="" Oct 31 01:15:13.347232 env[1316]: 2025-10-31 01:15:13.309 [INFO][4938] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="679a0f53791fd69c79c416e05f0d217c02eaabfee988b0580619de6c328519d7" Oct 31 01:15:13.347232 env[1316]: 2025-10-31 01:15:13.309 [INFO][4938] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="679a0f53791fd69c79c416e05f0d217c02eaabfee988b0580619de6c328519d7" Oct 31 01:15:13.347232 env[1316]: 2025-10-31 01:15:13.335 [INFO][4947] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="679a0f53791fd69c79c416e05f0d217c02eaabfee988b0580619de6c328519d7" HandleID="k8s-pod-network.679a0f53791fd69c79c416e05f0d217c02eaabfee988b0580619de6c328519d7" Workload="localhost-k8s-calico--kube--controllers--85445fc7bc--269qr-eth0" Oct 31 01:15:13.347232 env[1316]: 2025-10-31 01:15:13.335 [INFO][4947] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:15:13.347232 env[1316]: 2025-10-31 01:15:13.336 [INFO][4947] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:15:13.347232 env[1316]: 2025-10-31 01:15:13.341 [WARNING][4947] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="679a0f53791fd69c79c416e05f0d217c02eaabfee988b0580619de6c328519d7" HandleID="k8s-pod-network.679a0f53791fd69c79c416e05f0d217c02eaabfee988b0580619de6c328519d7" Workload="localhost-k8s-calico--kube--controllers--85445fc7bc--269qr-eth0" Oct 31 01:15:13.347232 env[1316]: 2025-10-31 01:15:13.341 [INFO][4947] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="679a0f53791fd69c79c416e05f0d217c02eaabfee988b0580619de6c328519d7" HandleID="k8s-pod-network.679a0f53791fd69c79c416e05f0d217c02eaabfee988b0580619de6c328519d7" Workload="localhost-k8s-calico--kube--controllers--85445fc7bc--269qr-eth0" Oct 31 01:15:13.347232 env[1316]: 2025-10-31 01:15:13.343 [INFO][4947] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:15:13.347232 env[1316]: 2025-10-31 01:15:13.344 [INFO][4938] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="679a0f53791fd69c79c416e05f0d217c02eaabfee988b0580619de6c328519d7" Oct 31 01:15:13.347232 env[1316]: time="2025-10-31T01:15:13.347184197Z" level=info msg="TearDown network for sandbox \"679a0f53791fd69c79c416e05f0d217c02eaabfee988b0580619de6c328519d7\" successfully" Oct 31 01:15:13.371516 sshd[4862]: pam_unix(sshd:session): session closed for user core Oct 31 01:15:13.370000 audit[4862]: USER_END pid=4862 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:13.371000 audit[4862]: CRED_DISP pid=4862 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:13.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='unit=sshd@13-10.0.0.95:22-10.0.0.1:47354 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:15:13.374055 systemd[1]: Started sshd@13-10.0.0.95:22-10.0.0.1:47354.service. Oct 31 01:15:13.373000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.95:22-10.0.0.1:47350 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:15:13.374529 systemd[1]: sshd@12-10.0.0.95:22-10.0.0.1:47350.service: Deactivated successfully. Oct 31 01:15:13.375947 systemd[1]: session-13.scope: Deactivated successfully. Oct 31 01:15:13.376472 systemd-logind[1300]: Session 13 logged out. Waiting for processes to exit. Oct 31 01:15:13.377568 systemd-logind[1300]: Removed session 13. Oct 31 01:15:13.388216 env[1316]: time="2025-10-31T01:15:13.388158218Z" level=info msg="RemovePodSandbox \"679a0f53791fd69c79c416e05f0d217c02eaabfee988b0580619de6c328519d7\" returns successfully" Oct 31 01:15:13.391900 env[1316]: time="2025-10-31T01:15:13.389991295Z" level=info msg="StopPodSandbox for \"a8dc1d78076c25070d75912ab77bb6fc27814dbb8dff1f2793ffbaeb1045cf72\"" Oct 31 01:15:13.404000 audit[4956]: USER_ACCT pid=4956 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:13.406278 sshd[4956]: Accepted publickey for core from 10.0.0.1 port 47354 ssh2: RSA SHA256:BzWaVf4M0LrLtWllQvHpK+M/9x+T9duV7gwz9J5cQAA Oct 31 01:15:13.405000 audit[4956]: CRED_ACQ pid=4956 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:13.405000 audit[4956]: SYSCALL arch=c000003e syscall=1 
success=yes exit=3 a0=5 a1=7ffc88f53d00 a2=3 a3=0 items=0 ppid=1 pid=4956 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:13.405000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 01:15:13.407406 sshd[4956]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 01:15:13.411927 systemd[1]: Started session-14.scope. Oct 31 01:15:13.412384 systemd-logind[1300]: New session 14 of user core. Oct 31 01:15:13.416000 audit[4956]: USER_START pid=4956 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:13.417000 audit[4976]: CRED_ACQ pid=4976 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:13.443844 env[1316]: time="2025-10-31T01:15:13.443777838Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:15:13.445004 env[1316]: time="2025-10-31T01:15:13.444955605Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 31 01:15:13.446109 kubelet[2122]: E1031 01:15:13.445157 2122 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 01:15:13.446109 kubelet[2122]: E1031 01:15:13.445227 2122 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 01:15:13.446109 kubelet[2122]: E1031 01:15:13.445336 2122 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7gq2m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Ca
pabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-86b4655b9-f4c4n_calico-system(9f314ab5-dad4-417f-bff7-f3843175cd3e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 31 01:15:13.446561 kubelet[2122]: E1031 01:15:13.446462 2122 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86b4655b9-f4c4n" podUID="9f314ab5-dad4-417f-bff7-f3843175cd3e" Oct 31 01:15:13.536920 env[1316]: 2025-10-31 01:15:13.436 [WARNING][4969] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a8dc1d78076c25070d75912ab77bb6fc27814dbb8dff1f2793ffbaeb1045cf72" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--kx9d2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"aa45bbe1-c342-47d9-b9fb-8fc8197ae119", ResourceVersion:"1087", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 14, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5c6c0988e21295cc655fa8d6807c1acbe3be85a4b86515cf01b1327a4088fe2b", Pod:"coredns-668d6bf9bc-kx9d2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2c2d884b202", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:15:13.536920 env[1316]: 2025-10-31 01:15:13.437 [INFO][4969] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a8dc1d78076c25070d75912ab77bb6fc27814dbb8dff1f2793ffbaeb1045cf72" Oct 31 01:15:13.536920 env[1316]: 2025-10-31 01:15:13.437 [INFO][4969] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a8dc1d78076c25070d75912ab77bb6fc27814dbb8dff1f2793ffbaeb1045cf72" iface="eth0" netns="" Oct 31 01:15:13.536920 env[1316]: 2025-10-31 01:15:13.437 [INFO][4969] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a8dc1d78076c25070d75912ab77bb6fc27814dbb8dff1f2793ffbaeb1045cf72" Oct 31 01:15:13.536920 env[1316]: 2025-10-31 01:15:13.437 [INFO][4969] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a8dc1d78076c25070d75912ab77bb6fc27814dbb8dff1f2793ffbaeb1045cf72" Oct 31 01:15:13.536920 env[1316]: 2025-10-31 01:15:13.458 [INFO][4981] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a8dc1d78076c25070d75912ab77bb6fc27814dbb8dff1f2793ffbaeb1045cf72" HandleID="k8s-pod-network.a8dc1d78076c25070d75912ab77bb6fc27814dbb8dff1f2793ffbaeb1045cf72" Workload="localhost-k8s-coredns--668d6bf9bc--kx9d2-eth0" Oct 31 01:15:13.536920 env[1316]: 2025-10-31 01:15:13.458 [INFO][4981] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:15:13.536920 env[1316]: 2025-10-31 01:15:13.458 [INFO][4981] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:15:13.536920 env[1316]: 2025-10-31 01:15:13.531 [WARNING][4981] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a8dc1d78076c25070d75912ab77bb6fc27814dbb8dff1f2793ffbaeb1045cf72" HandleID="k8s-pod-network.a8dc1d78076c25070d75912ab77bb6fc27814dbb8dff1f2793ffbaeb1045cf72" Workload="localhost-k8s-coredns--668d6bf9bc--kx9d2-eth0" Oct 31 01:15:13.536920 env[1316]: 2025-10-31 01:15:13.531 [INFO][4981] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a8dc1d78076c25070d75912ab77bb6fc27814dbb8dff1f2793ffbaeb1045cf72" HandleID="k8s-pod-network.a8dc1d78076c25070d75912ab77bb6fc27814dbb8dff1f2793ffbaeb1045cf72" Workload="localhost-k8s-coredns--668d6bf9bc--kx9d2-eth0" Oct 31 01:15:13.536920 env[1316]: 2025-10-31 01:15:13.533 [INFO][4981] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:15:13.536920 env[1316]: 2025-10-31 01:15:13.535 [INFO][4969] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a8dc1d78076c25070d75912ab77bb6fc27814dbb8dff1f2793ffbaeb1045cf72" Oct 31 01:15:13.537764 env[1316]: time="2025-10-31T01:15:13.536953101Z" level=info msg="TearDown network for sandbox \"a8dc1d78076c25070d75912ab77bb6fc27814dbb8dff1f2793ffbaeb1045cf72\" successfully" Oct 31 01:15:13.537764 env[1316]: time="2025-10-31T01:15:13.536988648Z" level=info msg="StopPodSandbox for \"a8dc1d78076c25070d75912ab77bb6fc27814dbb8dff1f2793ffbaeb1045cf72\" returns successfully" Oct 31 01:15:13.537764 env[1316]: time="2025-10-31T01:15:13.537548056Z" level=info msg="RemovePodSandbox for \"a8dc1d78076c25070d75912ab77bb6fc27814dbb8dff1f2793ffbaeb1045cf72\"" Oct 31 01:15:13.537764 env[1316]: time="2025-10-31T01:15:13.537588373Z" level=info msg="Forcibly stopping sandbox \"a8dc1d78076c25070d75912ab77bb6fc27814dbb8dff1f2793ffbaeb1045cf72\"" Oct 31 01:15:13.561101 sshd[4956]: pam_unix(sshd:session): session closed for user core Oct 31 01:15:13.561000 audit[4956]: USER_END pid=4956 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:13.561000 audit[4956]: CRED_DISP pid=4956 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:13.564825 systemd-logind[1300]: Session 14 logged out. Waiting for processes to exit. Oct 31 01:15:13.565024 systemd[1]: sshd@13-10.0.0.95:22-10.0.0.1:47354.service: Deactivated successfully. Oct 31 01:15:13.565884 systemd[1]: session-14.scope: Deactivated successfully. Oct 31 01:15:13.563000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.95:22-10.0.0.1:47354 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:15:13.566388 systemd-logind[1300]: Removed session 14. Oct 31 01:15:13.603106 env[1316]: 2025-10-31 01:15:13.571 [WARNING][5008] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a8dc1d78076c25070d75912ab77bb6fc27814dbb8dff1f2793ffbaeb1045cf72" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--kx9d2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"aa45bbe1-c342-47d9-b9fb-8fc8197ae119", ResourceVersion:"1087", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 14, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5c6c0988e21295cc655fa8d6807c1acbe3be85a4b86515cf01b1327a4088fe2b", Pod:"coredns-668d6bf9bc-kx9d2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2c2d884b202", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:15:13.603106 env[1316]: 2025-10-31 01:15:13.571 [INFO][5008] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a8dc1d78076c25070d75912ab77bb6fc27814dbb8dff1f2793ffbaeb1045cf72" Oct 31 01:15:13.603106 env[1316]: 2025-10-31 01:15:13.571 [INFO][5008] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a8dc1d78076c25070d75912ab77bb6fc27814dbb8dff1f2793ffbaeb1045cf72" iface="eth0" netns="" Oct 31 01:15:13.603106 env[1316]: 2025-10-31 01:15:13.571 [INFO][5008] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a8dc1d78076c25070d75912ab77bb6fc27814dbb8dff1f2793ffbaeb1045cf72" Oct 31 01:15:13.603106 env[1316]: 2025-10-31 01:15:13.571 [INFO][5008] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a8dc1d78076c25070d75912ab77bb6fc27814dbb8dff1f2793ffbaeb1045cf72" Oct 31 01:15:13.603106 env[1316]: 2025-10-31 01:15:13.589 [INFO][5018] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a8dc1d78076c25070d75912ab77bb6fc27814dbb8dff1f2793ffbaeb1045cf72" HandleID="k8s-pod-network.a8dc1d78076c25070d75912ab77bb6fc27814dbb8dff1f2793ffbaeb1045cf72" Workload="localhost-k8s-coredns--668d6bf9bc--kx9d2-eth0" Oct 31 01:15:13.603106 env[1316]: 2025-10-31 01:15:13.589 [INFO][5018] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:15:13.603106 env[1316]: 2025-10-31 01:15:13.589 [INFO][5018] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:15:13.603106 env[1316]: 2025-10-31 01:15:13.596 [WARNING][5018] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a8dc1d78076c25070d75912ab77bb6fc27814dbb8dff1f2793ffbaeb1045cf72" HandleID="k8s-pod-network.a8dc1d78076c25070d75912ab77bb6fc27814dbb8dff1f2793ffbaeb1045cf72" Workload="localhost-k8s-coredns--668d6bf9bc--kx9d2-eth0" Oct 31 01:15:13.603106 env[1316]: 2025-10-31 01:15:13.596 [INFO][5018] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a8dc1d78076c25070d75912ab77bb6fc27814dbb8dff1f2793ffbaeb1045cf72" HandleID="k8s-pod-network.a8dc1d78076c25070d75912ab77bb6fc27814dbb8dff1f2793ffbaeb1045cf72" Workload="localhost-k8s-coredns--668d6bf9bc--kx9d2-eth0" Oct 31 01:15:13.603106 env[1316]: 2025-10-31 01:15:13.598 [INFO][5018] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:15:13.603106 env[1316]: 2025-10-31 01:15:13.601 [INFO][5008] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a8dc1d78076c25070d75912ab77bb6fc27814dbb8dff1f2793ffbaeb1045cf72" Oct 31 01:15:13.603106 env[1316]: time="2025-10-31T01:15:13.603089348Z" level=info msg="TearDown network for sandbox \"a8dc1d78076c25070d75912ab77bb6fc27814dbb8dff1f2793ffbaeb1045cf72\" successfully" Oct 31 01:15:13.611679 env[1316]: time="2025-10-31T01:15:13.611615178Z" level=info msg="RemovePodSandbox \"a8dc1d78076c25070d75912ab77bb6fc27814dbb8dff1f2793ffbaeb1045cf72\" returns successfully" Oct 31 01:15:13.612120 env[1316]: time="2025-10-31T01:15:13.612088934Z" level=info msg="StopPodSandbox for \"a1b17665232b7120056382858a1fd08faf3450b3b25b3899b71c290103c6ea43\"" Oct 31 01:15:13.676705 env[1316]: 2025-10-31 01:15:13.641 [WARNING][5037] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a1b17665232b7120056382858a1fd08faf3450b3b25b3899b71c290103c6ea43" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--86687d576--r924d-eth0", GenerateName:"calico-apiserver-86687d576-", Namespace:"calico-apiserver", SelfLink:"", UID:"b7a793cf-29da-4092-aaf4-95f63c307028", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 14, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86687d576", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2fdb71b7e0cb842cb8e171237c28bdf005478b707c7f9893a436e16146d576b2", Pod:"calico-apiserver-86687d576-r924d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliac2d4a9bacb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:15:13.676705 env[1316]: 2025-10-31 01:15:13.642 [INFO][5037] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a1b17665232b7120056382858a1fd08faf3450b3b25b3899b71c290103c6ea43" Oct 31 01:15:13.676705 env[1316]: 2025-10-31 01:15:13.642 [INFO][5037] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a1b17665232b7120056382858a1fd08faf3450b3b25b3899b71c290103c6ea43" iface="eth0" netns="" Oct 31 01:15:13.676705 env[1316]: 2025-10-31 01:15:13.642 [INFO][5037] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a1b17665232b7120056382858a1fd08faf3450b3b25b3899b71c290103c6ea43" Oct 31 01:15:13.676705 env[1316]: 2025-10-31 01:15:13.642 [INFO][5037] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a1b17665232b7120056382858a1fd08faf3450b3b25b3899b71c290103c6ea43" Oct 31 01:15:13.676705 env[1316]: 2025-10-31 01:15:13.664 [INFO][5046] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a1b17665232b7120056382858a1fd08faf3450b3b25b3899b71c290103c6ea43" HandleID="k8s-pod-network.a1b17665232b7120056382858a1fd08faf3450b3b25b3899b71c290103c6ea43" Workload="localhost-k8s-calico--apiserver--86687d576--r924d-eth0" Oct 31 01:15:13.676705 env[1316]: 2025-10-31 01:15:13.664 [INFO][5046] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:15:13.676705 env[1316]: 2025-10-31 01:15:13.665 [INFO][5046] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:15:13.676705 env[1316]: 2025-10-31 01:15:13.671 [WARNING][5046] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a1b17665232b7120056382858a1fd08faf3450b3b25b3899b71c290103c6ea43" HandleID="k8s-pod-network.a1b17665232b7120056382858a1fd08faf3450b3b25b3899b71c290103c6ea43" Workload="localhost-k8s-calico--apiserver--86687d576--r924d-eth0" Oct 31 01:15:13.676705 env[1316]: 2025-10-31 01:15:13.671 [INFO][5046] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a1b17665232b7120056382858a1fd08faf3450b3b25b3899b71c290103c6ea43" HandleID="k8s-pod-network.a1b17665232b7120056382858a1fd08faf3450b3b25b3899b71c290103c6ea43" Workload="localhost-k8s-calico--apiserver--86687d576--r924d-eth0" Oct 31 01:15:13.676705 env[1316]: 2025-10-31 01:15:13.673 [INFO][5046] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:15:13.676705 env[1316]: 2025-10-31 01:15:13.675 [INFO][5037] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a1b17665232b7120056382858a1fd08faf3450b3b25b3899b71c290103c6ea43" Oct 31 01:15:13.677176 env[1316]: time="2025-10-31T01:15:13.676732797Z" level=info msg="TearDown network for sandbox \"a1b17665232b7120056382858a1fd08faf3450b3b25b3899b71c290103c6ea43\" successfully" Oct 31 01:15:13.677176 env[1316]: time="2025-10-31T01:15:13.676764888Z" level=info msg="StopPodSandbox for \"a1b17665232b7120056382858a1fd08faf3450b3b25b3899b71c290103c6ea43\" returns successfully" Oct 31 01:15:13.677452 env[1316]: time="2025-10-31T01:15:13.677409348Z" level=info msg="RemovePodSandbox for \"a1b17665232b7120056382858a1fd08faf3450b3b25b3899b71c290103c6ea43\"" Oct 31 01:15:13.677630 env[1316]: time="2025-10-31T01:15:13.677452529Z" level=info msg="Forcibly stopping sandbox \"a1b17665232b7120056382858a1fd08faf3450b3b25b3899b71c290103c6ea43\"" Oct 31 01:15:13.741094 env[1316]: 2025-10-31 01:15:13.706 [WARNING][5064] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a1b17665232b7120056382858a1fd08faf3450b3b25b3899b71c290103c6ea43" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--86687d576--r924d-eth0", GenerateName:"calico-apiserver-86687d576-", Namespace:"calico-apiserver", SelfLink:"", UID:"b7a793cf-29da-4092-aaf4-95f63c307028", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 14, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86687d576", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2fdb71b7e0cb842cb8e171237c28bdf005478b707c7f9893a436e16146d576b2", Pod:"calico-apiserver-86687d576-r924d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliac2d4a9bacb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:15:13.741094 env[1316]: 2025-10-31 01:15:13.706 [INFO][5064] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a1b17665232b7120056382858a1fd08faf3450b3b25b3899b71c290103c6ea43" Oct 31 01:15:13.741094 env[1316]: 2025-10-31 01:15:13.706 [INFO][5064] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a1b17665232b7120056382858a1fd08faf3450b3b25b3899b71c290103c6ea43" iface="eth0" netns="" Oct 31 01:15:13.741094 env[1316]: 2025-10-31 01:15:13.706 [INFO][5064] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a1b17665232b7120056382858a1fd08faf3450b3b25b3899b71c290103c6ea43" Oct 31 01:15:13.741094 env[1316]: 2025-10-31 01:15:13.706 [INFO][5064] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a1b17665232b7120056382858a1fd08faf3450b3b25b3899b71c290103c6ea43" Oct 31 01:15:13.741094 env[1316]: 2025-10-31 01:15:13.729 [INFO][5073] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a1b17665232b7120056382858a1fd08faf3450b3b25b3899b71c290103c6ea43" HandleID="k8s-pod-network.a1b17665232b7120056382858a1fd08faf3450b3b25b3899b71c290103c6ea43" Workload="localhost-k8s-calico--apiserver--86687d576--r924d-eth0" Oct 31 01:15:13.741094 env[1316]: 2025-10-31 01:15:13.729 [INFO][5073] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:15:13.741094 env[1316]: 2025-10-31 01:15:13.730 [INFO][5073] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:15:13.741094 env[1316]: 2025-10-31 01:15:13.736 [WARNING][5073] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a1b17665232b7120056382858a1fd08faf3450b3b25b3899b71c290103c6ea43" HandleID="k8s-pod-network.a1b17665232b7120056382858a1fd08faf3450b3b25b3899b71c290103c6ea43" Workload="localhost-k8s-calico--apiserver--86687d576--r924d-eth0" Oct 31 01:15:13.741094 env[1316]: 2025-10-31 01:15:13.736 [INFO][5073] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a1b17665232b7120056382858a1fd08faf3450b3b25b3899b71c290103c6ea43" HandleID="k8s-pod-network.a1b17665232b7120056382858a1fd08faf3450b3b25b3899b71c290103c6ea43" Workload="localhost-k8s-calico--apiserver--86687d576--r924d-eth0" Oct 31 01:15:13.741094 env[1316]: 2025-10-31 01:15:13.737 [INFO][5073] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:15:13.741094 env[1316]: 2025-10-31 01:15:13.739 [INFO][5064] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a1b17665232b7120056382858a1fd08faf3450b3b25b3899b71c290103c6ea43" Oct 31 01:15:13.741763 env[1316]: time="2025-10-31T01:15:13.741141044Z" level=info msg="TearDown network for sandbox \"a1b17665232b7120056382858a1fd08faf3450b3b25b3899b71c290103c6ea43\" successfully" Oct 31 01:15:13.893292 env[1316]: time="2025-10-31T01:15:13.893140451Z" level=info msg="RemovePodSandbox \"a1b17665232b7120056382858a1fd08faf3450b3b25b3899b71c290103c6ea43\" returns successfully" Oct 31 01:15:13.893747 env[1316]: time="2025-10-31T01:15:13.893711120Z" level=info msg="StopPodSandbox for \"6253fcc6f01781fb7beac3beb7069b429a82e077afcb9e1f805aa36607712838\"" Oct 31 01:15:13.963995 env[1316]: 2025-10-31 01:15:13.924 [WARNING][5094] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6253fcc6f01781fb7beac3beb7069b429a82e077afcb9e1f805aa36607712838" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--fd8js-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bd0bddee-8a85-4f55-a28b-a795608cb1fb", ResourceVersion:"1134", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 14, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aef3346542189330a59fc2271cdc0bcd80fe3f9b1c9be0307c90b08fe6f454d6", Pod:"csi-node-driver-fd8js", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie61c2935cb0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:15:13.963995 env[1316]: 2025-10-31 01:15:13.924 [INFO][5094] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6253fcc6f01781fb7beac3beb7069b429a82e077afcb9e1f805aa36607712838" Oct 31 01:15:13.963995 env[1316]: 2025-10-31 01:15:13.924 [INFO][5094] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6253fcc6f01781fb7beac3beb7069b429a82e077afcb9e1f805aa36607712838" iface="eth0" netns="" Oct 31 01:15:13.963995 env[1316]: 2025-10-31 01:15:13.924 [INFO][5094] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6253fcc6f01781fb7beac3beb7069b429a82e077afcb9e1f805aa36607712838" Oct 31 01:15:13.963995 env[1316]: 2025-10-31 01:15:13.924 [INFO][5094] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6253fcc6f01781fb7beac3beb7069b429a82e077afcb9e1f805aa36607712838" Oct 31 01:15:13.963995 env[1316]: 2025-10-31 01:15:13.951 [INFO][5103] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6253fcc6f01781fb7beac3beb7069b429a82e077afcb9e1f805aa36607712838" HandleID="k8s-pod-network.6253fcc6f01781fb7beac3beb7069b429a82e077afcb9e1f805aa36607712838" Workload="localhost-k8s-csi--node--driver--fd8js-eth0" Oct 31 01:15:13.963995 env[1316]: 2025-10-31 01:15:13.952 [INFO][5103] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:15:13.963995 env[1316]: 2025-10-31 01:15:13.952 [INFO][5103] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:15:13.963995 env[1316]: 2025-10-31 01:15:13.958 [WARNING][5103] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="6253fcc6f01781fb7beac3beb7069b429a82e077afcb9e1f805aa36607712838" HandleID="k8s-pod-network.6253fcc6f01781fb7beac3beb7069b429a82e077afcb9e1f805aa36607712838" Workload="localhost-k8s-csi--node--driver--fd8js-eth0" Oct 31 01:15:13.963995 env[1316]: 2025-10-31 01:15:13.958 [INFO][5103] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6253fcc6f01781fb7beac3beb7069b429a82e077afcb9e1f805aa36607712838" HandleID="k8s-pod-network.6253fcc6f01781fb7beac3beb7069b429a82e077afcb9e1f805aa36607712838" Workload="localhost-k8s-csi--node--driver--fd8js-eth0" Oct 31 01:15:13.963995 env[1316]: 2025-10-31 01:15:13.959 [INFO][5103] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 01:15:13.963995 env[1316]: 2025-10-31 01:15:13.961 [INFO][5094] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6253fcc6f01781fb7beac3beb7069b429a82e077afcb9e1f805aa36607712838" Oct 31 01:15:13.964641 env[1316]: time="2025-10-31T01:15:13.964021754Z" level=info msg="TearDown network for sandbox \"6253fcc6f01781fb7beac3beb7069b429a82e077afcb9e1f805aa36607712838\" successfully" Oct 31 01:15:13.964641 env[1316]: time="2025-10-31T01:15:13.964061290Z" level=info msg="StopPodSandbox for \"6253fcc6f01781fb7beac3beb7069b429a82e077afcb9e1f805aa36607712838\" returns successfully" Oct 31 01:15:13.964641 env[1316]: time="2025-10-31T01:15:13.964600420Z" level=info msg="RemovePodSandbox for \"6253fcc6f01781fb7beac3beb7069b429a82e077afcb9e1f805aa36607712838\"" Oct 31 01:15:13.964754 env[1316]: time="2025-10-31T01:15:13.964666014Z" level=info msg="Forcibly stopping sandbox \"6253fcc6f01781fb7beac3beb7069b429a82e077afcb9e1f805aa36607712838\"" Oct 31 01:15:14.033083 env[1316]: 2025-10-31 01:15:13.999 [WARNING][5123] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6253fcc6f01781fb7beac3beb7069b429a82e077afcb9e1f805aa36607712838" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--fd8js-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bd0bddee-8a85-4f55-a28b-a795608cb1fb", ResourceVersion:"1134", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 14, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aef3346542189330a59fc2271cdc0bcd80fe3f9b1c9be0307c90b08fe6f454d6", Pod:"csi-node-driver-fd8js", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie61c2935cb0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:15:14.033083 env[1316]: 2025-10-31 01:15:14.000 [INFO][5123] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6253fcc6f01781fb7beac3beb7069b429a82e077afcb9e1f805aa36607712838" Oct 31 01:15:14.033083 env[1316]: 2025-10-31 01:15:14.000 [INFO][5123] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6253fcc6f01781fb7beac3beb7069b429a82e077afcb9e1f805aa36607712838" iface="eth0" netns="" Oct 31 01:15:14.033083 env[1316]: 2025-10-31 01:15:14.000 [INFO][5123] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6253fcc6f01781fb7beac3beb7069b429a82e077afcb9e1f805aa36607712838" Oct 31 01:15:14.033083 env[1316]: 2025-10-31 01:15:14.001 [INFO][5123] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6253fcc6f01781fb7beac3beb7069b429a82e077afcb9e1f805aa36607712838" Oct 31 01:15:14.033083 env[1316]: 2025-10-31 01:15:14.021 [INFO][5132] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6253fcc6f01781fb7beac3beb7069b429a82e077afcb9e1f805aa36607712838" HandleID="k8s-pod-network.6253fcc6f01781fb7beac3beb7069b429a82e077afcb9e1f805aa36607712838" Workload="localhost-k8s-csi--node--driver--fd8js-eth0" Oct 31 01:15:14.033083 env[1316]: 2025-10-31 01:15:14.022 [INFO][5132] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:15:14.033083 env[1316]: 2025-10-31 01:15:14.022 [INFO][5132] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:15:14.033083 env[1316]: 2025-10-31 01:15:14.027 [WARNING][5132] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="6253fcc6f01781fb7beac3beb7069b429a82e077afcb9e1f805aa36607712838" HandleID="k8s-pod-network.6253fcc6f01781fb7beac3beb7069b429a82e077afcb9e1f805aa36607712838" Workload="localhost-k8s-csi--node--driver--fd8js-eth0" Oct 31 01:15:14.033083 env[1316]: 2025-10-31 01:15:14.027 [INFO][5132] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6253fcc6f01781fb7beac3beb7069b429a82e077afcb9e1f805aa36607712838" HandleID="k8s-pod-network.6253fcc6f01781fb7beac3beb7069b429a82e077afcb9e1f805aa36607712838" Workload="localhost-k8s-csi--node--driver--fd8js-eth0" Oct 31 01:15:14.033083 env[1316]: 2025-10-31 01:15:14.029 [INFO][5132] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 01:15:14.033083 env[1316]: 2025-10-31 01:15:14.031 [INFO][5123] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6253fcc6f01781fb7beac3beb7069b429a82e077afcb9e1f805aa36607712838" Oct 31 01:15:14.033647 env[1316]: time="2025-10-31T01:15:14.033112125Z" level=info msg="TearDown network for sandbox \"6253fcc6f01781fb7beac3beb7069b429a82e077afcb9e1f805aa36607712838\" successfully" Oct 31 01:15:14.082808 env[1316]: time="2025-10-31T01:15:14.082726415Z" level=info msg="RemovePodSandbox \"6253fcc6f01781fb7beac3beb7069b429a82e077afcb9e1f805aa36607712838\" returns successfully" Oct 31 01:15:14.083428 env[1316]: time="2025-10-31T01:15:14.083371757Z" level=info msg="StopPodSandbox for \"c7a817ab27adbf161245e638067760ad35c68542e9ff0edad69720be6e1b49ae\"" Oct 31 01:15:14.149364 env[1316]: 2025-10-31 01:15:14.116 [WARNING][5150] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="c7a817ab27adbf161245e638067760ad35c68542e9ff0edad69720be6e1b49ae" WorkloadEndpoint="localhost-k8s-whisker--c78cb78df--xltv5-eth0" Oct 31 01:15:14.149364 env[1316]: 2025-10-31 01:15:14.116 [INFO][5150] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c7a817ab27adbf161245e638067760ad35c68542e9ff0edad69720be6e1b49ae" Oct 31 01:15:14.149364 env[1316]: 2025-10-31 01:15:14.116 [INFO][5150] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c7a817ab27adbf161245e638067760ad35c68542e9ff0edad69720be6e1b49ae" iface="eth0" netns="" Oct 31 01:15:14.149364 env[1316]: 2025-10-31 01:15:14.116 [INFO][5150] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c7a817ab27adbf161245e638067760ad35c68542e9ff0edad69720be6e1b49ae" Oct 31 01:15:14.149364 env[1316]: 2025-10-31 01:15:14.116 [INFO][5150] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c7a817ab27adbf161245e638067760ad35c68542e9ff0edad69720be6e1b49ae" Oct 31 01:15:14.149364 env[1316]: 2025-10-31 01:15:14.138 [INFO][5160] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c7a817ab27adbf161245e638067760ad35c68542e9ff0edad69720be6e1b49ae" HandleID="k8s-pod-network.c7a817ab27adbf161245e638067760ad35c68542e9ff0edad69720be6e1b49ae" Workload="localhost-k8s-whisker--c78cb78df--xltv5-eth0" Oct 31 01:15:14.149364 env[1316]: 2025-10-31 01:15:14.138 [INFO][5160] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:15:14.149364 env[1316]: 2025-10-31 01:15:14.138 [INFO][5160] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:15:14.149364 env[1316]: 2025-10-31 01:15:14.144 [WARNING][5160] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="c7a817ab27adbf161245e638067760ad35c68542e9ff0edad69720be6e1b49ae" HandleID="k8s-pod-network.c7a817ab27adbf161245e638067760ad35c68542e9ff0edad69720be6e1b49ae" Workload="localhost-k8s-whisker--c78cb78df--xltv5-eth0" Oct 31 01:15:14.149364 env[1316]: 2025-10-31 01:15:14.144 [INFO][5160] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c7a817ab27adbf161245e638067760ad35c68542e9ff0edad69720be6e1b49ae" HandleID="k8s-pod-network.c7a817ab27adbf161245e638067760ad35c68542e9ff0edad69720be6e1b49ae" Workload="localhost-k8s-whisker--c78cb78df--xltv5-eth0" Oct 31 01:15:14.149364 env[1316]: 2025-10-31 01:15:14.145 [INFO][5160] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 01:15:14.149364 env[1316]: 2025-10-31 01:15:14.147 [INFO][5150] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c7a817ab27adbf161245e638067760ad35c68542e9ff0edad69720be6e1b49ae" Oct 31 01:15:14.150049 env[1316]: time="2025-10-31T01:15:14.149973413Z" level=info msg="TearDown network for sandbox \"c7a817ab27adbf161245e638067760ad35c68542e9ff0edad69720be6e1b49ae\" successfully" Oct 31 01:15:14.150049 env[1316]: time="2025-10-31T01:15:14.150018980Z" level=info msg="StopPodSandbox for \"c7a817ab27adbf161245e638067760ad35c68542e9ff0edad69720be6e1b49ae\" returns successfully" Oct 31 01:15:14.150560 env[1316]: time="2025-10-31T01:15:14.150534184Z" level=info msg="RemovePodSandbox for \"c7a817ab27adbf161245e638067760ad35c68542e9ff0edad69720be6e1b49ae\"" Oct 31 01:15:14.150655 env[1316]: time="2025-10-31T01:15:14.150565955Z" level=info msg="Forcibly stopping sandbox \"c7a817ab27adbf161245e638067760ad35c68542e9ff0edad69720be6e1b49ae\"" Oct 31 01:15:14.221594 env[1316]: 2025-10-31 01:15:14.187 [WARNING][5178] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="c7a817ab27adbf161245e638067760ad35c68542e9ff0edad69720be6e1b49ae" WorkloadEndpoint="localhost-k8s-whisker--c78cb78df--xltv5-eth0" Oct 31 01:15:14.221594 env[1316]: 2025-10-31 01:15:14.187 [INFO][5178] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c7a817ab27adbf161245e638067760ad35c68542e9ff0edad69720be6e1b49ae" Oct 31 01:15:14.221594 env[1316]: 2025-10-31 01:15:14.187 [INFO][5178] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c7a817ab27adbf161245e638067760ad35c68542e9ff0edad69720be6e1b49ae" iface="eth0" netns="" Oct 31 01:15:14.221594 env[1316]: 2025-10-31 01:15:14.187 [INFO][5178] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c7a817ab27adbf161245e638067760ad35c68542e9ff0edad69720be6e1b49ae" Oct 31 01:15:14.221594 env[1316]: 2025-10-31 01:15:14.187 [INFO][5178] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c7a817ab27adbf161245e638067760ad35c68542e9ff0edad69720be6e1b49ae" Oct 31 01:15:14.221594 env[1316]: 2025-10-31 01:15:14.209 [INFO][5187] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c7a817ab27adbf161245e638067760ad35c68542e9ff0edad69720be6e1b49ae" HandleID="k8s-pod-network.c7a817ab27adbf161245e638067760ad35c68542e9ff0edad69720be6e1b49ae" Workload="localhost-k8s-whisker--c78cb78df--xltv5-eth0" Oct 31 01:15:14.221594 env[1316]: 2025-10-31 01:15:14.209 [INFO][5187] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:15:14.221594 env[1316]: 2025-10-31 01:15:14.209 [INFO][5187] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:15:14.221594 env[1316]: 2025-10-31 01:15:14.215 [WARNING][5187] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="c7a817ab27adbf161245e638067760ad35c68542e9ff0edad69720be6e1b49ae" HandleID="k8s-pod-network.c7a817ab27adbf161245e638067760ad35c68542e9ff0edad69720be6e1b49ae" Workload="localhost-k8s-whisker--c78cb78df--xltv5-eth0" Oct 31 01:15:14.221594 env[1316]: 2025-10-31 01:15:14.215 [INFO][5187] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c7a817ab27adbf161245e638067760ad35c68542e9ff0edad69720be6e1b49ae" HandleID="k8s-pod-network.c7a817ab27adbf161245e638067760ad35c68542e9ff0edad69720be6e1b49ae" Workload="localhost-k8s-whisker--c78cb78df--xltv5-eth0" Oct 31 01:15:14.221594 env[1316]: 2025-10-31 01:15:14.217 [INFO][5187] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 01:15:14.221594 env[1316]: 2025-10-31 01:15:14.219 [INFO][5178] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c7a817ab27adbf161245e638067760ad35c68542e9ff0edad69720be6e1b49ae" Oct 31 01:15:14.222102 env[1316]: time="2025-10-31T01:15:14.221637227Z" level=info msg="TearDown network for sandbox \"c7a817ab27adbf161245e638067760ad35c68542e9ff0edad69720be6e1b49ae\" successfully" Oct 31 01:15:14.290898 env[1316]: time="2025-10-31T01:15:14.290813104Z" level=info msg="RemovePodSandbox \"c7a817ab27adbf161245e638067760ad35c68542e9ff0edad69720be6e1b49ae\" returns successfully" Oct 31 01:15:14.291505 env[1316]: time="2025-10-31T01:15:14.291449498Z" level=info msg="StopPodSandbox for \"4c359beb4f8ec8f908fe9d4402c54b29b0240b5e569583f1b61e012ab32f2ee0\"" Oct 31 01:15:14.350018 env[1316]: 2025-10-31 01:15:14.322 [WARNING][5205] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4c359beb4f8ec8f908fe9d4402c54b29b0240b5e569583f1b61e012ab32f2ee0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--p8xhs-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"453498c9-0a59-4ad4-bd57-363364a2fea3", ResourceVersion:"1052", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 14, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"bd955f866d47fad52d0a86b1aa02bce2ee1268b94cbf305dd4e007ee58c8f679", Pod:"coredns-668d6bf9bc-p8xhs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7dc4eb3f79c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:15:14.350018 env[1316]: 2025-10-31 01:15:14.322 [INFO][5205] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4c359beb4f8ec8f908fe9d4402c54b29b0240b5e569583f1b61e012ab32f2ee0" Oct 31 01:15:14.350018 env[1316]: 2025-10-31 01:15:14.322 [INFO][5205] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4c359beb4f8ec8f908fe9d4402c54b29b0240b5e569583f1b61e012ab32f2ee0" iface="eth0" netns="" Oct 31 01:15:14.350018 env[1316]: 2025-10-31 01:15:14.322 [INFO][5205] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4c359beb4f8ec8f908fe9d4402c54b29b0240b5e569583f1b61e012ab32f2ee0" Oct 31 01:15:14.350018 env[1316]: 2025-10-31 01:15:14.322 [INFO][5205] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4c359beb4f8ec8f908fe9d4402c54b29b0240b5e569583f1b61e012ab32f2ee0" Oct 31 01:15:14.350018 env[1316]: 2025-10-31 01:15:14.340 [INFO][5213] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4c359beb4f8ec8f908fe9d4402c54b29b0240b5e569583f1b61e012ab32f2ee0" HandleID="k8s-pod-network.4c359beb4f8ec8f908fe9d4402c54b29b0240b5e569583f1b61e012ab32f2ee0" Workload="localhost-k8s-coredns--668d6bf9bc--p8xhs-eth0" Oct 31 01:15:14.350018 env[1316]: 2025-10-31 01:15:14.340 [INFO][5213] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:15:14.350018 env[1316]: 2025-10-31 01:15:14.340 [INFO][5213] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:15:14.350018 env[1316]: 2025-10-31 01:15:14.345 [WARNING][5213] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="4c359beb4f8ec8f908fe9d4402c54b29b0240b5e569583f1b61e012ab32f2ee0" HandleID="k8s-pod-network.4c359beb4f8ec8f908fe9d4402c54b29b0240b5e569583f1b61e012ab32f2ee0" Workload="localhost-k8s-coredns--668d6bf9bc--p8xhs-eth0" Oct 31 01:15:14.350018 env[1316]: 2025-10-31 01:15:14.345 [INFO][5213] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4c359beb4f8ec8f908fe9d4402c54b29b0240b5e569583f1b61e012ab32f2ee0" HandleID="k8s-pod-network.4c359beb4f8ec8f908fe9d4402c54b29b0240b5e569583f1b61e012ab32f2ee0" Workload="localhost-k8s-coredns--668d6bf9bc--p8xhs-eth0" Oct 31 01:15:14.350018 env[1316]: 2025-10-31 01:15:14.346 [INFO][5213] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 01:15:14.350018 env[1316]: 2025-10-31 01:15:14.348 [INFO][5205] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4c359beb4f8ec8f908fe9d4402c54b29b0240b5e569583f1b61e012ab32f2ee0" Oct 31 01:15:14.350507 env[1316]: time="2025-10-31T01:15:14.350051090Z" level=info msg="TearDown network for sandbox \"4c359beb4f8ec8f908fe9d4402c54b29b0240b5e569583f1b61e012ab32f2ee0\" successfully" Oct 31 01:15:14.350507 env[1316]: time="2025-10-31T01:15:14.350081888Z" level=info msg="StopPodSandbox for \"4c359beb4f8ec8f908fe9d4402c54b29b0240b5e569583f1b61e012ab32f2ee0\" returns successfully" Oct 31 01:15:14.350711 env[1316]: time="2025-10-31T01:15:14.350658409Z" level=info msg="RemovePodSandbox for \"4c359beb4f8ec8f908fe9d4402c54b29b0240b5e569583f1b61e012ab32f2ee0\"" Oct 31 01:15:14.350768 env[1316]: time="2025-10-31T01:15:14.350706019Z" level=info msg="Forcibly stopping sandbox \"4c359beb4f8ec8f908fe9d4402c54b29b0240b5e569583f1b61e012ab32f2ee0\"" Oct 31 01:15:14.407363 env[1316]: 2025-10-31 01:15:14.380 [WARNING][5230] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4c359beb4f8ec8f908fe9d4402c54b29b0240b5e569583f1b61e012ab32f2ee0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--p8xhs-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"453498c9-0a59-4ad4-bd57-363364a2fea3", ResourceVersion:"1052", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 14, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bd955f866d47fad52d0a86b1aa02bce2ee1268b94cbf305dd4e007ee58c8f679", Pod:"coredns-668d6bf9bc-p8xhs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7dc4eb3f79c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:15:14.407363 env[1316]: 2025-10-31 01:15:14.380 [INFO][5230] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4c359beb4f8ec8f908fe9d4402c54b29b0240b5e569583f1b61e012ab32f2ee0" Oct 31 01:15:14.407363 env[1316]: 2025-10-31 01:15:14.380 [INFO][5230] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4c359beb4f8ec8f908fe9d4402c54b29b0240b5e569583f1b61e012ab32f2ee0" iface="eth0" netns="" Oct 31 01:15:14.407363 env[1316]: 2025-10-31 01:15:14.380 [INFO][5230] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4c359beb4f8ec8f908fe9d4402c54b29b0240b5e569583f1b61e012ab32f2ee0" Oct 31 01:15:14.407363 env[1316]: 2025-10-31 01:15:14.380 [INFO][5230] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4c359beb4f8ec8f908fe9d4402c54b29b0240b5e569583f1b61e012ab32f2ee0" Oct 31 01:15:14.407363 env[1316]: 2025-10-31 01:15:14.397 [INFO][5239] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4c359beb4f8ec8f908fe9d4402c54b29b0240b5e569583f1b61e012ab32f2ee0" HandleID="k8s-pod-network.4c359beb4f8ec8f908fe9d4402c54b29b0240b5e569583f1b61e012ab32f2ee0" Workload="localhost-k8s-coredns--668d6bf9bc--p8xhs-eth0" Oct 31 01:15:14.407363 env[1316]: 2025-10-31 01:15:14.397 [INFO][5239] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:15:14.407363 env[1316]: 2025-10-31 01:15:14.397 [INFO][5239] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:15:14.407363 env[1316]: 2025-10-31 01:15:14.402 [WARNING][5239] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4c359beb4f8ec8f908fe9d4402c54b29b0240b5e569583f1b61e012ab32f2ee0" HandleID="k8s-pod-network.4c359beb4f8ec8f908fe9d4402c54b29b0240b5e569583f1b61e012ab32f2ee0" Workload="localhost-k8s-coredns--668d6bf9bc--p8xhs-eth0" Oct 31 01:15:14.407363 env[1316]: 2025-10-31 01:15:14.402 [INFO][5239] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4c359beb4f8ec8f908fe9d4402c54b29b0240b5e569583f1b61e012ab32f2ee0" HandleID="k8s-pod-network.4c359beb4f8ec8f908fe9d4402c54b29b0240b5e569583f1b61e012ab32f2ee0" Workload="localhost-k8s-coredns--668d6bf9bc--p8xhs-eth0" Oct 31 01:15:14.407363 env[1316]: 2025-10-31 01:15:14.404 [INFO][5239] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:15:14.407363 env[1316]: 2025-10-31 01:15:14.405 [INFO][5230] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4c359beb4f8ec8f908fe9d4402c54b29b0240b5e569583f1b61e012ab32f2ee0" Oct 31 01:15:14.407363 env[1316]: time="2025-10-31T01:15:14.407312758Z" level=info msg="TearDown network for sandbox \"4c359beb4f8ec8f908fe9d4402c54b29b0240b5e569583f1b61e012ab32f2ee0\" successfully" Oct 31 01:15:14.410960 env[1316]: time="2025-10-31T01:15:14.410900034Z" level=info msg="RemovePodSandbox \"4c359beb4f8ec8f908fe9d4402c54b29b0240b5e569583f1b61e012ab32f2ee0\" returns successfully" Oct 31 01:15:15.488691 env[1316]: time="2025-10-31T01:15:15.488636812Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 01:15:15.827561 env[1316]: time="2025-10-31T01:15:15.827398240Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:15:15.856235 env[1316]: time="2025-10-31T01:15:15.856128237Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 01:15:15.856455 kubelet[2122]: E1031 01:15:15.856407 2122 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:15:15.856852 kubelet[2122]: E1031 01:15:15.856480 2122 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:15:15.856852 kubelet[2122]: E1031 01:15:15.856668 2122 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r9jfv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-86687d576-r924d_calico-apiserver(b7a793cf-29da-4092-aaf4-95f63c307028): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 01:15:15.857877 kubelet[2122]: E1031 01:15:15.857828 2122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86687d576-r924d" podUID="b7a793cf-29da-4092-aaf4-95f63c307028" Oct 31 01:15:16.487166 env[1316]: time="2025-10-31T01:15:16.487090851Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 31 01:15:16.952640 env[1316]: time="2025-10-31T01:15:16.952548420Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:15:17.014346 env[1316]: time="2025-10-31T01:15:17.014251370Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 31 01:15:17.014646 kubelet[2122]: E1031 01:15:17.014560 2122 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 01:15:17.014986 kubelet[2122]: E1031 
01:15:17.014653 2122 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 01:15:17.014986 kubelet[2122]: E1031 01:15:17.014808 2122 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hnks7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-85445fc7bc-269qr_calico-system(cbcd2bd9-2395-4730-b047-aac75539fb47): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 31 01:15:17.016005 kubelet[2122]: E1031 01:15:17.015977 2122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-85445fc7bc-269qr" podUID="cbcd2bd9-2395-4730-b047-aac75539fb47" Oct 31 01:15:17.487650 env[1316]: time="2025-10-31T01:15:17.487557372Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 31 01:15:17.966427 env[1316]: time="2025-10-31T01:15:17.966371812Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:15:18.049356 env[1316]: time="2025-10-31T01:15:18.049263998Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 31 01:15:18.049573 kubelet[2122]: E1031 01:15:18.049527 2122 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 01:15:18.049974 kubelet[2122]: E1031 01:15:18.049584 2122 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 01:15:18.049974 kubelet[2122]: E1031 01:15:18.049793 2122 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n7fvl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-fd8js_calico-system(bd0bddee-8a85-4f55-a28b-a795608cb1fb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 31 01:15:18.050171 env[1316]: time="2025-10-31T01:15:18.049938944Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 01:15:18.468795 env[1316]: time="2025-10-31T01:15:18.468736879Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:15:18.510234 env[1316]: time="2025-10-31T01:15:18.510166414Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 01:15:18.510394 kubelet[2122]: E1031 01:15:18.510360 2122 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:15:18.510483 kubelet[2122]: E1031 01:15:18.510430 2122 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:15:18.510966 env[1316]: time="2025-10-31T01:15:18.510756506Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 31 01:15:18.511054 kubelet[2122]: E1031 01:15:18.510823 2122 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 
--tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gxwxj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-86687d576-lcpfh_calico-apiserver(7cb997cc-c908-4ddb-9523-a2aea9785811): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 01:15:18.511970 kubelet[2122]: E1031 01:15:18.511933 2122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86687d576-lcpfh" podUID="7cb997cc-c908-4ddb-9523-a2aea9785811" Oct 31 01:15:18.564758 systemd[1]: Started sshd@14-10.0.0.95:22-10.0.0.1:47368.service. Oct 31 01:15:18.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.95:22-10.0.0.1:47368 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:15:18.566758 kernel: kauditd_printk_skb: 23 callbacks suppressed Oct 31 01:15:18.566906 kernel: audit: type=1130 audit(1761873318.563:482): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.95:22-10.0.0.1:47368 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:15:18.596000 audit[5248]: USER_ACCT pid=5248 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:18.597481 sshd[5248]: Accepted publickey for core from 10.0.0.1 port 47368 ssh2: RSA SHA256:BzWaVf4M0LrLtWllQvHpK+M/9x+T9duV7gwz9J5cQAA Oct 31 01:15:18.599135 sshd[5248]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 01:15:18.602719 systemd-logind[1300]: New session 15 of user core. Oct 31 01:15:18.603029 systemd[1]: Started session-15.scope. Oct 31 01:15:18.598000 audit[5248]: CRED_ACQ pid=5248 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:18.610699 kernel: audit: type=1101 audit(1761873318.596:483): pid=5248 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:18.610753 kernel: audit: type=1103 audit(1761873318.598:484): pid=5248 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:18.610772 kernel: audit: type=1006 audit(1761873318.598:485): pid=5248 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Oct 31 01:15:18.598000 audit[5248]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd8d3d34d0 a2=3 a3=0 items=0 ppid=1 pid=5248 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:18.621776 kernel: audit: type=1300 audit(1761873318.598:485): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd8d3d34d0 a2=3 a3=0 items=0 ppid=1 pid=5248 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:18.621863 kernel: audit: type=1327 audit(1761873318.598:485): proctitle=737368643A20636F7265205B707269765D Oct 31 01:15:18.598000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 01:15:18.607000 audit[5248]: USER_START pid=5248 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:18.631681 kernel: audit: type=1105 audit(1761873318.607:486): pid=5248 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:18.631720 kernel: audit: type=1103 audit(1761873318.608:487): pid=5251 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:18.608000 audit[5251]: CRED_ACQ pid=5251 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:18.708021 sshd[5248]: pam_unix(sshd:session): session closed for user core Oct 31 01:15:18.708000 
audit[5248]: USER_END pid=5248 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:18.710388 systemd[1]: sshd@14-10.0.0.95:22-10.0.0.1:47368.service: Deactivated successfully. Oct 31 01:15:18.711820 systemd[1]: session-15.scope: Deactivated successfully. Oct 31 01:15:18.711866 systemd-logind[1300]: Session 15 logged out. Waiting for processes to exit. Oct 31 01:15:18.712871 systemd-logind[1300]: Removed session 15. Oct 31 01:15:18.708000 audit[5248]: CRED_DISP pid=5248 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:18.722639 kernel: audit: type=1106 audit(1761873318.708:488): pid=5248 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:18.722697 kernel: audit: type=1104 audit(1761873318.708:489): pid=5248 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:18.710000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.95:22-10.0.0.1:47368 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:15:18.876721 env[1316]: time="2025-10-31T01:15:18.876634533Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:15:18.880119 env[1316]: time="2025-10-31T01:15:18.880038176Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 31 01:15:18.880412 kubelet[2122]: E1031 01:15:18.880359 2122 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 01:15:18.880504 kubelet[2122]: E1031 01:15:18.880428 2122 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 01:15:18.881017 kubelet[2122]: E1031 01:15:18.880727 2122 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n7fvl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-fd8js_calico-system(bd0bddee-8a85-4f55-a28b-a795608cb1fb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 31 01:15:18.881196 env[1316]: time="2025-10-31T01:15:18.880810127Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 31 01:15:18.882222 kubelet[2122]: E1031 01:15:18.882184 2122 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fd8js" podUID="bd0bddee-8a85-4f55-a28b-a795608cb1fb" Oct 31 01:15:19.204837 env[1316]: time="2025-10-31T01:15:19.204758440Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:15:19.358907 env[1316]: time="2025-10-31T01:15:19.358824189Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 31 01:15:19.359077 kubelet[2122]: E1031 01:15:19.359047 2122 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 01:15:19.359402 kubelet[2122]: E1031 01:15:19.359094 2122 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 01:15:19.359402 kubelet[2122]: E1031 01:15:19.359225 2122 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8zbds,ReadOnly:true,MountPath:/va
r/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-wj6mp_calico-system(50cdc712-db7a-41da-8129-57ca3765d884): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 31 01:15:19.360409 kubelet[2122]: E1031 01:15:19.360375 2122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wj6mp" podUID="50cdc712-db7a-41da-8129-57ca3765d884" Oct 31 01:15:23.709807 systemd[1]: Started sshd@15-10.0.0.95:22-10.0.0.1:52836.service. Oct 31 01:15:23.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.95:22-10.0.0.1:52836 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:15:23.711350 kernel: kauditd_printk_skb: 1 callbacks suppressed Oct 31 01:15:23.711479 kernel: audit: type=1130 audit(1761873323.709:491): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.95:22-10.0.0.1:52836 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:15:23.739000 audit[5274]: USER_ACCT pid=5274 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:23.739940 sshd[5274]: Accepted publickey for core from 10.0.0.1 port 52836 ssh2: RSA SHA256:BzWaVf4M0LrLtWllQvHpK+M/9x+T9duV7gwz9J5cQAA Oct 31 01:15:23.743141 sshd[5274]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 01:15:23.742000 audit[5274]: CRED_ACQ pid=5274 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:23.746845 systemd-logind[1300]: New session 16 of user core. Oct 31 01:15:23.747597 systemd[1]: Started session-16.scope. 
Oct 31 01:15:23.752603 kernel: audit: type=1101 audit(1761873323.739:492): pid=5274 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:23.752743 kernel: audit: type=1103 audit(1761873323.742:493): pid=5274 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:23.752762 kernel: audit: type=1006 audit(1761873323.742:494): pid=5274 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Oct 31 01:15:23.742000 audit[5274]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd1f7e0a30 a2=3 a3=0 items=0 ppid=1 pid=5274 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:23.763772 kernel: audit: type=1300 audit(1761873323.742:494): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd1f7e0a30 a2=3 a3=0 items=0 ppid=1 pid=5274 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:23.763828 kernel: audit: type=1327 audit(1761873323.742:494): proctitle=737368643A20636F7265205B707269765D Oct 31 01:15:23.742000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 01:15:23.751000 audit[5274]: USER_START pid=5274 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh 
res=success' Oct 31 01:15:23.773270 kernel: audit: type=1105 audit(1761873323.751:495): pid=5274 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:23.773316 kernel: audit: type=1103 audit(1761873323.752:496): pid=5279 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:23.752000 audit[5279]: CRED_ACQ pid=5279 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:23.851455 sshd[5274]: pam_unix(sshd:session): session closed for user core Oct 31 01:15:23.851000 audit[5274]: USER_END pid=5274 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:23.853959 systemd[1]: sshd@15-10.0.0.95:22-10.0.0.1:52836.service: Deactivated successfully. Oct 31 01:15:23.854738 systemd[1]: session-16.scope: Deactivated successfully. Oct 31 01:15:23.858289 systemd-logind[1300]: Session 16 logged out. Waiting for processes to exit. Oct 31 01:15:23.859036 systemd-logind[1300]: Removed session 16. 
Oct 31 01:15:23.851000 audit[5274]: CRED_DISP pid=5274 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:23.920389 kernel: audit: type=1106 audit(1761873323.851:497): pid=5274 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:23.920466 kernel: audit: type=1104 audit(1761873323.851:498): pid=5274 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:23.853000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.95:22-10.0.0.1:52836 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:15:24.488273 kubelet[2122]: E1031 01:15:24.488226 2122 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86b4655b9-f4c4n" podUID="9f314ab5-dad4-417f-bff7-f3843175cd3e" Oct 31 01:15:26.795572 kubelet[2122]: E1031 01:15:26.795541 2122 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:15:28.486490 kubelet[2122]: E1031 01:15:28.486447 2122 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:15:28.855730 systemd[1]: Started sshd@16-10.0.0.95:22-10.0.0.1:52844.service. Oct 31 01:15:28.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.95:22-10.0.0.1:52844 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:15:28.857705 kernel: kauditd_printk_skb: 1 callbacks suppressed Oct 31 01:15:28.857774 kernel: audit: type=1130 audit(1761873328.855:500): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.95:22-10.0.0.1:52844 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:15:28.891000 audit[5313]: USER_ACCT pid=5313 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:28.892438 sshd[5313]: Accepted publickey for core from 10.0.0.1 port 52844 ssh2: RSA SHA256:BzWaVf4M0LrLtWllQvHpK+M/9x+T9duV7gwz9J5cQAA Oct 31 01:15:28.893968 sshd[5313]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 01:15:28.897683 systemd-logind[1300]: New session 17 of user core. Oct 31 01:15:28.898141 systemd[1]: Started session-17.scope. 
Oct 31 01:15:28.893000 audit[5313]: CRED_ACQ pid=5313 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:28.906782 kernel: audit: type=1101 audit(1761873328.891:501): pid=5313 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:28.906851 kernel: audit: type=1103 audit(1761873328.893:502): pid=5313 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:28.906875 kernel: audit: type=1006 audit(1761873328.893:503): pid=5313 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Oct 31 01:15:28.893000 audit[5313]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcad6a59b0 a2=3 a3=0 items=0 ppid=1 pid=5313 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:28.918765 kernel: audit: type=1300 audit(1761873328.893:503): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcad6a59b0 a2=3 a3=0 items=0 ppid=1 pid=5313 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:28.918808 kernel: audit: type=1327 audit(1761873328.893:503): proctitle=737368643A20636F7265205B707269765D Oct 31 01:15:28.893000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 01:15:28.901000 audit[5313]: USER_START 
pid=5313 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:28.929358 kernel: audit: type=1105 audit(1761873328.901:504): pid=5313 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:28.929406 kernel: audit: type=1103 audit(1761873328.903:505): pid=5316 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:28.903000 audit[5316]: CRED_ACQ pid=5316 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:28.999572 sshd[5313]: pam_unix(sshd:session): session closed for user core Oct 31 01:15:28.999000 audit[5313]: USER_END pid=5313 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:29.001945 systemd[1]: sshd@16-10.0.0.95:22-10.0.0.1:52844.service: Deactivated successfully. Oct 31 01:15:29.003256 systemd[1]: session-17.scope: Deactivated successfully. Oct 31 01:15:29.007334 systemd-logind[1300]: Session 17 logged out. Waiting for processes to exit. Oct 31 01:15:29.008300 systemd-logind[1300]: Removed session 17. 
Oct 31 01:15:28.999000 audit[5313]: CRED_DISP pid=5313 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:29.015334 kernel: audit: type=1106 audit(1761873328.999:506): pid=5313 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:29.015401 kernel: audit: type=1104 audit(1761873328.999:507): pid=5313 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:28.999000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.95:22-10.0.0.1:52844 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:15:29.487966 kubelet[2122]: E1031 01:15:29.487919 2122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86687d576-r924d" podUID="b7a793cf-29da-4092-aaf4-95f63c307028" Oct 31 01:15:30.486635 kubelet[2122]: E1031 01:15:30.486561 2122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85445fc7bc-269qr" podUID="cbcd2bd9-2395-4730-b047-aac75539fb47" Oct 31 01:15:32.487141 kubelet[2122]: E1031 01:15:32.487104 2122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86687d576-lcpfh" podUID="7cb997cc-c908-4ddb-9523-a2aea9785811" Oct 31 01:15:32.487644 
kubelet[2122]: E1031 01:15:32.487535 2122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wj6mp" podUID="50cdc712-db7a-41da-8129-57ca3765d884" Oct 31 01:15:32.488162 kubelet[2122]: E1031 01:15:32.488053 2122 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fd8js" podUID="bd0bddee-8a85-4f55-a28b-a795608cb1fb" Oct 31 01:15:34.003236 systemd[1]: Started sshd@17-10.0.0.95:22-10.0.0.1:57140.service. Oct 31 01:15:34.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.95:22-10.0.0.1:57140 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:15:34.004777 kernel: kauditd_printk_skb: 1 callbacks suppressed Oct 31 01:15:34.004834 kernel: audit: type=1130 audit(1761873334.002:509): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.95:22-10.0.0.1:57140 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:15:34.032000 audit[5327]: USER_ACCT pid=5327 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:34.033162 sshd[5327]: Accepted publickey for core from 10.0.0.1 port 57140 ssh2: RSA SHA256:BzWaVf4M0LrLtWllQvHpK+M/9x+T9duV7gwz9J5cQAA Oct 31 01:15:34.035106 sshd[5327]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 01:15:34.039263 systemd-logind[1300]: New session 18 of user core. Oct 31 01:15:34.034000 audit[5327]: CRED_ACQ pid=5327 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:34.040004 systemd[1]: Started session-18.scope. 
Oct 31 01:15:34.045535 kernel: audit: type=1101 audit(1761873334.032:510): pid=5327 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:34.045601 kernel: audit: type=1103 audit(1761873334.034:511): pid=5327 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:34.045650 kernel: audit: type=1006 audit(1761873334.034:512): pid=5327 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Oct 31 01:15:34.034000 audit[5327]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffced215d30 a2=3 a3=0 items=0 ppid=1 pid=5327 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:34.055955 kernel: audit: type=1300 audit(1761873334.034:512): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffced215d30 a2=3 a3=0 items=0 ppid=1 pid=5327 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:34.056019 kernel: audit: type=1327 audit(1761873334.034:512): proctitle=737368643A20636F7265205B707269765D Oct 31 01:15:34.034000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 01:15:34.044000 audit[5327]: USER_START pid=5327 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh 
res=success' Oct 31 01:15:34.065247 kernel: audit: type=1105 audit(1761873334.044:513): pid=5327 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:34.065314 kernel: audit: type=1103 audit(1761873334.045:514): pid=5330 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:34.045000 audit[5330]: CRED_ACQ pid=5330 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:34.173431 sshd[5327]: pam_unix(sshd:session): session closed for user core Oct 31 01:15:34.173000 audit[5327]: USER_END pid=5327 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:34.176048 systemd[1]: sshd@17-10.0.0.95:22-10.0.0.1:57140.service: Deactivated successfully. Oct 31 01:15:34.176846 systemd[1]: session-18.scope: Deactivated successfully. Oct 31 01:15:34.180219 systemd-logind[1300]: Session 18 logged out. Waiting for processes to exit. Oct 31 01:15:34.180872 systemd-logind[1300]: Removed session 18. 
Oct 31 01:15:34.173000 audit[5327]: CRED_DISP pid=5327 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:34.187724 kernel: audit: type=1106 audit(1761873334.173:515): pid=5327 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:34.187780 kernel: audit: type=1104 audit(1761873334.173:516): pid=5327 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:34.175000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.95:22-10.0.0.1:57140 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:15:36.486403 kubelet[2122]: E1031 01:15:36.486346 2122 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:15:37.486358 kubelet[2122]: E1031 01:15:37.486301 2122 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:15:37.487712 env[1316]: time="2025-10-31T01:15:37.487654815Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 31 01:15:37.826034 env[1316]: time="2025-10-31T01:15:37.825883642Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:15:37.827162 env[1316]: time="2025-10-31T01:15:37.827091946Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 31 01:15:37.827409 kubelet[2122]: E1031 01:15:37.827353 2122 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 01:15:37.827786 kubelet[2122]: E1031 01:15:37.827414 2122 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 01:15:37.827786 
kubelet[2122]: E1031 01:15:37.827522 2122 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:8c280e58b5284c02a79bc96b4b32937d,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7gq2m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-86b4655b9-f4c4n_calico-system(9f314ab5-dad4-417f-bff7-f3843175cd3e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 31 01:15:37.829923 env[1316]: time="2025-10-31T01:15:37.829889662Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 31 01:15:38.158432 env[1316]: time="2025-10-31T01:15:38.158249575Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:15:38.159514 env[1316]: time="2025-10-31T01:15:38.159459251Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 31 01:15:38.159811 kubelet[2122]: E1031 01:15:38.159750 2122 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 01:15:38.159886 kubelet[2122]: E1031 01:15:38.159816 2122 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 01:15:38.159965 kubelet[2122]: E1031 01:15:38.159928 2122 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7gq2m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-86b4655b9-f4c4n_calico-system(9f314ab5-dad4-417f-bff7-f3843175cd3e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 31 01:15:38.161240 kubelet[2122]: E1031 01:15:38.161175 2122 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86b4655b9-f4c4n" podUID="9f314ab5-dad4-417f-bff7-f3843175cd3e" Oct 31 01:15:39.177019 systemd[1]: Started sshd@18-10.0.0.95:22-10.0.0.1:57148.service. Oct 31 01:15:39.175000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.95:22-10.0.0.1:57148 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:15:39.179082 kernel: kauditd_printk_skb: 1 callbacks suppressed Oct 31 01:15:39.179153 kernel: audit: type=1130 audit(1761873339.175:518): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.95:22-10.0.0.1:57148 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:15:39.208000 audit[5342]: USER_ACCT pid=5342 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:39.210542 sshd[5342]: Accepted publickey for core from 10.0.0.1 port 57148 ssh2: RSA SHA256:BzWaVf4M0LrLtWllQvHpK+M/9x+T9duV7gwz9J5cQAA Oct 31 01:15:39.212546 sshd[5342]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 01:15:39.210000 audit[5342]: CRED_ACQ pid=5342 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:39.218529 systemd-logind[1300]: New session 19 of user core. Oct 31 01:15:39.218959 systemd[1]: Started session-19.scope. Oct 31 01:15:39.224647 kernel: audit: type=1101 audit(1761873339.208:519): pid=5342 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:39.224767 kernel: audit: type=1103 audit(1761873339.210:520): pid=5342 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:39.224797 kernel: audit: type=1006 audit(1761873339.210:521): pid=5342 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Oct 31 01:15:39.210000 audit[5342]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd16fedf90 a2=3 a3=0 items=0 ppid=1 pid=5342 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:39.236990 kernel: audit: type=1300 audit(1761873339.210:521): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd16fedf90 a2=3 a3=0 items=0 ppid=1 pid=5342 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:39.237122 kernel: audit: type=1327 audit(1761873339.210:521): proctitle=737368643A20636F7265205B707269765D Oct 31 01:15:39.210000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 01:15:39.222000 audit[5342]: USER_START pid=5342 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:39.247774 kernel: audit: type=1105 audit(1761873339.222:522): pid=5342 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:39.247820 kernel: audit: type=1103 audit(1761873339.223:523): pid=5345 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:39.223000 audit[5345]: CRED_ACQ pid=5345 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:39.358968 sshd[5342]: pam_unix(sshd:session): session closed for user core Oct 31 01:15:39.358000 
audit[5342]: USER_END pid=5342 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:39.362283 systemd[1]: Started sshd@19-10.0.0.95:22-10.0.0.1:57154.service. Oct 31 01:15:39.363037 systemd[1]: sshd@18-10.0.0.95:22-10.0.0.1:57148.service: Deactivated successfully. Oct 31 01:15:39.365811 systemd[1]: session-19.scope: Deactivated successfully. Oct 31 01:15:39.367760 systemd-logind[1300]: Session 19 logged out. Waiting for processes to exit. Oct 31 01:15:39.358000 audit[5342]: CRED_DISP pid=5342 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:39.368952 systemd-logind[1300]: Removed session 19. Oct 31 01:15:39.374802 kernel: audit: type=1106 audit(1761873339.358:524): pid=5342 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:39.374891 kernel: audit: type=1104 audit(1761873339.358:525): pid=5342 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:39.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.95:22-10.0.0.1:57154 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 01:15:39.358000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.95:22-10.0.0.1:57148 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:15:39.395000 audit[5354]: USER_ACCT pid=5354 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:39.397048 sshd[5354]: Accepted publickey for core from 10.0.0.1 port 57154 ssh2: RSA SHA256:BzWaVf4M0LrLtWllQvHpK+M/9x+T9duV7gwz9J5cQAA Oct 31 01:15:39.396000 audit[5354]: CRED_ACQ pid=5354 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:39.396000 audit[5354]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc9e8f8110 a2=3 a3=0 items=0 ppid=1 pid=5354 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:39.396000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 01:15:39.398069 sshd[5354]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 01:15:39.401505 systemd-logind[1300]: New session 20 of user core. Oct 31 01:15:39.402323 systemd[1]: Started session-20.scope. 
Oct 31 01:15:39.404000 audit[5354]: USER_START pid=5354 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:39.406000 audit[5359]: CRED_ACQ pid=5359 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:39.487153 kubelet[2122]: E1031 01:15:39.486975 2122 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 01:15:40.424676 sshd[5354]: pam_unix(sshd:session): session closed for user core Oct 31 01:15:40.424000 audit[5354]: USER_END pid=5354 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:40.424000 audit[5354]: CRED_DISP pid=5354 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:40.427775 systemd[1]: Started sshd@20-10.0.0.95:22-10.0.0.1:52796.service. Oct 31 01:15:40.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.95:22-10.0.0.1:52796 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:15:40.428976 systemd[1]: sshd@19-10.0.0.95:22-10.0.0.1:57154.service: Deactivated successfully. 
Oct 31 01:15:40.427000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.95:22-10.0.0.1:57154 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:15:40.430374 systemd[1]: session-20.scope: Deactivated successfully. Oct 31 01:15:40.430960 systemd-logind[1300]: Session 20 logged out. Waiting for processes to exit. Oct 31 01:15:40.431852 systemd-logind[1300]: Removed session 20. Oct 31 01:15:40.459000 audit[5366]: USER_ACCT pid=5366 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:40.460999 sshd[5366]: Accepted publickey for core from 10.0.0.1 port 52796 ssh2: RSA SHA256:BzWaVf4M0LrLtWllQvHpK+M/9x+T9duV7gwz9J5cQAA Oct 31 01:15:40.460000 audit[5366]: CRED_ACQ pid=5366 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:40.460000 audit[5366]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffea5f36b10 a2=3 a3=0 items=0 ppid=1 pid=5366 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:40.460000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 01:15:40.462349 sshd[5366]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 01:15:40.465940 systemd-logind[1300]: New session 21 of user core. Oct 31 01:15:40.466914 systemd[1]: Started session-21.scope. 
Oct 31 01:15:40.469000 audit[5366]: USER_START pid=5366 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:40.470000 audit[5371]: CRED_ACQ pid=5371 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:41.039000 audit[5384]: NETFILTER_CFG table=filter:122 family=2 entries=26 op=nft_register_rule pid=5384 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:15:41.039000 audit[5384]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fff5cd19bb0 a2=0 a3=7fff5cd19b9c items=0 ppid=2275 pid=5384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:41.039000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:15:41.044357 sshd[5366]: pam_unix(sshd:session): session closed for user core Oct 31 01:15:41.044000 audit[5366]: USER_END pid=5366 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:41.044000 audit[5366]: CRED_DISP pid=5366 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:41.047606 systemd[1]: 
Started sshd@21-10.0.0.95:22-10.0.0.1:52806.service. Oct 31 01:15:41.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.95:22-10.0.0.1:52806 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:15:41.049466 systemd-logind[1300]: Session 21 logged out. Waiting for processes to exit. Oct 31 01:15:41.049593 systemd[1]: sshd@20-10.0.0.95:22-10.0.0.1:52796.service: Deactivated successfully. Oct 31 01:15:41.047000 audit[5384]: NETFILTER_CFG table=nat:123 family=2 entries=20 op=nft_register_rule pid=5384 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:15:41.047000 audit[5384]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff5cd19bb0 a2=0 a3=0 items=0 ppid=2275 pid=5384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:41.047000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:15:41.048000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.95:22-10.0.0.1:52796 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:15:41.050531 systemd[1]: session-21.scope: Deactivated successfully. Oct 31 01:15:41.051062 systemd-logind[1300]: Removed session 21. 
Oct 31 01:15:41.060000 audit[5390]: NETFILTER_CFG table=filter:124 family=2 entries=38 op=nft_register_rule pid=5390 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:15:41.060000 audit[5390]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffe5d596110 a2=0 a3=7ffe5d5960fc items=0 ppid=2275 pid=5390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:41.060000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:15:41.065000 audit[5390]: NETFILTER_CFG table=nat:125 family=2 entries=20 op=nft_register_rule pid=5390 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 01:15:41.065000 audit[5390]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe5d596110 a2=0 a3=0 items=0 ppid=2275 pid=5390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:41.065000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 01:15:41.081000 audit[5385]: USER_ACCT pid=5385 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:41.083970 sshd[5385]: Accepted publickey for core from 10.0.0.1 port 52806 ssh2: RSA SHA256:BzWaVf4M0LrLtWllQvHpK+M/9x+T9duV7gwz9J5cQAA Oct 31 01:15:41.082000 audit[5385]: CRED_ACQ pid=5385 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix 
acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:41.083000 audit[5385]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd2d794d50 a2=3 a3=0 items=0 ppid=1 pid=5385 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:41.083000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 01:15:41.084960 sshd[5385]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 01:15:41.090653 systemd-logind[1300]: New session 22 of user core. Oct 31 01:15:41.091383 systemd[1]: Started session-22.scope. Oct 31 01:15:41.093000 audit[5385]: USER_START pid=5385 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:41.095000 audit[5392]: CRED_ACQ pid=5392 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:41.376591 sshd[5385]: pam_unix(sshd:session): session closed for user core Oct 31 01:15:41.376000 audit[5385]: USER_END pid=5385 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:41.377000 audit[5385]: CRED_DISP pid=5385 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 
31 01:15:41.380320 systemd[1]: Started sshd@22-10.0.0.95:22-10.0.0.1:52822.service. Oct 31 01:15:41.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.95:22-10.0.0.1:52822 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:15:41.381237 systemd[1]: sshd@21-10.0.0.95:22-10.0.0.1:52806.service: Deactivated successfully. Oct 31 01:15:41.379000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.95:22-10.0.0.1:52806 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:15:41.382434 systemd[1]: session-22.scope: Deactivated successfully. Oct 31 01:15:41.382473 systemd-logind[1300]: Session 22 logged out. Waiting for processes to exit. Oct 31 01:15:41.383839 systemd-logind[1300]: Removed session 22. Oct 31 01:15:41.415000 audit[5400]: USER_ACCT pid=5400 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:41.417102 sshd[5400]: Accepted publickey for core from 10.0.0.1 port 52822 ssh2: RSA SHA256:BzWaVf4M0LrLtWllQvHpK+M/9x+T9duV7gwz9J5cQAA Oct 31 01:15:41.416000 audit[5400]: CRED_ACQ pid=5400 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:41.416000 audit[5400]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd71f9a470 a2=3 a3=0 items=0 ppid=1 pid=5400 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:41.416000 
audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 01:15:41.418096 sshd[5400]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 01:15:41.421570 systemd-logind[1300]: New session 23 of user core. Oct 31 01:15:41.422345 systemd[1]: Started session-23.scope. Oct 31 01:15:41.424000 audit[5400]: USER_START pid=5400 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:41.426000 audit[5404]: CRED_ACQ pid=5404 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:41.487706 env[1316]: time="2025-10-31T01:15:41.487653355Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 01:15:41.533153 sshd[5400]: pam_unix(sshd:session): session closed for user core Oct 31 01:15:41.532000 audit[5400]: USER_END pid=5400 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:41.532000 audit[5400]: CRED_DISP pid=5400 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:41.535784 systemd[1]: sshd@22-10.0.0.95:22-10.0.0.1:52822.service: Deactivated successfully. 
Oct 31 01:15:41.534000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.95:22-10.0.0.1:52822 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:15:41.536814 systemd[1]: session-23.scope: Deactivated successfully. Oct 31 01:15:41.537689 systemd-logind[1300]: Session 23 logged out. Waiting for processes to exit. Oct 31 01:15:41.538539 systemd-logind[1300]: Removed session 23. Oct 31 01:15:41.812107 env[1316]: time="2025-10-31T01:15:41.812028819Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:15:41.813135 env[1316]: time="2025-10-31T01:15:41.813099819Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 01:15:41.813372 kubelet[2122]: E1031 01:15:41.813325 2122 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:15:41.813721 kubelet[2122]: E1031 01:15:41.813390 2122 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:15:41.813721 kubelet[2122]: E1031 01:15:41.813530 2122 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r9jfv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-86687d576-r924d_calico-apiserver(b7a793cf-29da-4092-aaf4-95f63c307028): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 01:15:41.814769 kubelet[2122]: E1031 01:15:41.814714 2122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86687d576-r924d" podUID="b7a793cf-29da-4092-aaf4-95f63c307028" Oct 31 01:15:43.487603 env[1316]: time="2025-10-31T01:15:43.487555260Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 31 01:15:43.828650 env[1316]: time="2025-10-31T01:15:43.828481613Z" 
level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:15:43.829632 env[1316]: time="2025-10-31T01:15:43.829560989Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 31 01:15:43.829868 kubelet[2122]: E1031 01:15:43.829813 2122 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 01:15:43.830126 kubelet[2122]: E1031 01:15:43.829882 2122 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 01:15:43.830126 kubelet[2122]: E1031 01:15:43.830051 2122 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8zbds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-wj6mp_calico-system(50cdc712-db7a-41da-8129-57ca3765d884): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 31 01:15:43.831247 kubelet[2122]: E1031 01:15:43.831207 2122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wj6mp" podUID="50cdc712-db7a-41da-8129-57ca3765d884" Oct 31 01:15:45.487862 env[1316]: time="2025-10-31T01:15:45.487569403Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 31 01:15:45.841534 env[1316]: time="2025-10-31T01:15:45.841346763Z" level=info msg="trying next host - response was 
http.StatusNotFound" host=ghcr.io Oct 31 01:15:45.842659 env[1316]: time="2025-10-31T01:15:45.842576034Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 31 01:15:45.842864 kubelet[2122]: E1031 01:15:45.842826 2122 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 01:15:45.843105 kubelet[2122]: E1031 01:15:45.842882 2122 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 01:15:45.843266 env[1316]: time="2025-10-31T01:15:45.843236632Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 31 01:15:45.843334 kubelet[2122]: E1031 01:15:45.843185 2122 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hnks7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-85445fc7bc-269qr_calico-system(cbcd2bd9-2395-4730-b047-aac75539fb47): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 31 01:15:45.844682 kubelet[2122]: E1031 01:15:45.844647 2122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85445fc7bc-269qr" podUID="cbcd2bd9-2395-4730-b047-aac75539fb47" Oct 31 01:15:46.165453 env[1316]: time="2025-10-31T01:15:46.165368631Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:15:46.277532 env[1316]: 
time="2025-10-31T01:15:46.277422725Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 31 01:15:46.277744 kubelet[2122]: E1031 01:15:46.277703 2122 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 01:15:46.277840 kubelet[2122]: E1031 01:15:46.277761 2122 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 01:15:46.277978 kubelet[2122]: E1031 01:15:46.277918 2122 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n7fvl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-fd8js_calico-system(bd0bddee-8a85-4f55-a28b-a795608cb1fb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 31 01:15:46.279788 env[1316]: time="2025-10-31T01:15:46.279751809Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 31 01:15:46.537362 systemd[1]: Started sshd@23-10.0.0.95:22-10.0.0.1:52826.service. Oct 31 01:15:46.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.95:22-10.0.0.1:52826 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:15:46.539379 kernel: kauditd_printk_skb: 57 callbacks suppressed Oct 31 01:15:46.539454 kernel: audit: type=1130 audit(1761873346.535:567): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.95:22-10.0.0.1:52826 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:15:46.565000 audit[5423]: USER_ACCT pid=5423 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:46.567496 sshd[5423]: Accepted publickey for core from 10.0.0.1 port 52826 ssh2: RSA SHA256:BzWaVf4M0LrLtWllQvHpK+M/9x+T9duV7gwz9J5cQAA Oct 31 01:15:46.569979 sshd[5423]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 01:15:46.573725 systemd-logind[1300]: New session 24 of user core. Oct 31 01:15:46.568000 audit[5423]: CRED_ACQ pid=5423 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:46.574438 systemd[1]: Started session-24.scope. 
Oct 31 01:15:46.581973 kernel: audit: type=1101 audit(1761873346.565:568): pid=5423 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:46.582083 kernel: audit: type=1103 audit(1761873346.568:569): pid=5423 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:46.582123 kernel: audit: type=1006 audit(1761873346.568:570): pid=5423 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Oct 31 01:15:46.568000 audit[5423]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffceb812920 a2=3 a3=0 items=0 ppid=1 pid=5423 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:46.592781 kernel: audit: type=1300 audit(1761873346.568:570): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffceb812920 a2=3 a3=0 items=0 ppid=1 pid=5423 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 01:15:46.592827 kernel: audit: type=1327 audit(1761873346.568:570): proctitle=737368643A20636F7265205B707269765D Oct 31 01:15:46.568000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 01:15:46.578000 audit[5423]: USER_START pid=5423 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh 
res=success' Oct 31 01:15:46.602290 kernel: audit: type=1105 audit(1761873346.578:571): pid=5423 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:46.602362 kernel: audit: type=1103 audit(1761873346.579:572): pid=5426 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:46.579000 audit[5426]: CRED_ACQ pid=5426 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:46.675787 sshd[5423]: pam_unix(sshd:session): session closed for user core Oct 31 01:15:46.675000 audit[5423]: USER_END pid=5423 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:46.678471 systemd[1]: sshd@23-10.0.0.95:22-10.0.0.1:52826.service: Deactivated successfully. Oct 31 01:15:46.679436 systemd[1]: session-24.scope: Deactivated successfully. Oct 31 01:15:46.675000 audit[5423]: CRED_DISP pid=5423 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:46.686465 systemd-logind[1300]: Session 24 logged out. Waiting for processes to exit. Oct 31 01:15:46.687372 systemd-logind[1300]: Removed session 24. 
Oct 31 01:15:46.691574 kernel: audit: type=1106 audit(1761873346.675:573): pid=5423 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:46.691649 kernel: audit: type=1104 audit(1761873346.675:574): pid=5423 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31 01:15:46.675000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.95:22-10.0.0.1:52826 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 01:15:46.716420 env[1316]: time="2025-10-31T01:15:46.716380296Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:15:46.717543 env[1316]: time="2025-10-31T01:15:46.717493725Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 31 01:15:46.717751 kubelet[2122]: E1031 01:15:46.717705 2122 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 01:15:46.717826 
kubelet[2122]: E1031 01:15:46.717759 2122 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 01:15:46.718200 kubelet[2122]: E1031 01:15:46.717995 2122 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n7fvl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:
*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-fd8js_calico-system(bd0bddee-8a85-4f55-a28b-a795608cb1fb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 31 01:15:46.718359 env[1316]: time="2025-10-31T01:15:46.718046527Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 01:15:46.719169 kubelet[2122]: E1031 01:15:46.719133 2122 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fd8js" podUID="bd0bddee-8a85-4f55-a28b-a795608cb1fb" Oct 31 01:15:47.048586 env[1316]: time="2025-10-31T01:15:47.048497952Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 
31 01:15:47.049750 env[1316]: time="2025-10-31T01:15:47.049704329Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 01:15:47.049996 kubelet[2122]: E1031 01:15:47.049943 2122 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:15:47.050384 kubelet[2122]: E1031 01:15:47.050006 2122 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:15:47.050384 kubelet[2122]: E1031 01:15:47.050153 2122 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gxwxj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-86687d576-lcpfh_calico-apiserver(7cb997cc-c908-4ddb-9523-a2aea9785811): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Oct 31 01:15:47.051354 kubelet[2122]: E1031 01:15:47.051324 2122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86687d576-lcpfh" podUID="7cb997cc-c908-4ddb-9523-a2aea9785811"
Oct 31 01:15:47.523000 audit[5438]: NETFILTER_CFG table=filter:126 family=2 entries=26 op=nft_register_rule pid=5438 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Oct 31 01:15:47.523000 audit[5438]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffa54a49d0 a2=0 a3=7fffa54a49bc items=0 ppid=2275 pid=5438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 01:15:47.523000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Oct 31 01:15:47.530000 audit[5438]: NETFILTER_CFG table=nat:127 family=2 entries=104 op=nft_register_chain pid=5438 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Oct 31 01:15:47.530000 audit[5438]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7fffa54a49d0 a2=0 a3=7fffa54a49bc items=0 ppid=2275 pid=5438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0
key=(null)
Oct 31 01:15:47.530000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Oct 31 01:15:51.680916 kernel: kauditd_printk_skb: 7 callbacks suppressed
Oct 31 01:15:51.681033 kernel: audit: type=1130 audit(1761873351.678:578): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.95:22-10.0.0.1:47642 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:15:51.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.95:22-10.0.0.1:47642 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:15:51.678872 systemd[1]: Started sshd@24-10.0.0.95:22-10.0.0.1:47642.service.
Oct 31 01:15:51.708000 audit[5441]: USER_ACCT pid=5441 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 01:15:51.709283 sshd[5441]: Accepted publickey for core from 10.0.0.1 port 47642 ssh2: RSA SHA256:BzWaVf4M0LrLtWllQvHpK+M/9x+T9duV7gwz9J5cQAA
Oct 31 01:15:51.716643 kernel: audit: type=1101 audit(1761873351.708:579): pid=5441 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 01:15:51.716683 kernel: audit: type=1103 audit(1761873351.716:580): pid=5441 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 01:15:51.716000 audit[5441]: CRED_ACQ pid=5441
uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 01:15:51.716977 sshd[5441]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 31 01:15:51.722217 systemd[1]: Started session-25.scope.
Oct 31 01:15:51.722403 systemd-logind[1300]: New session 25 of user core.
Oct 31 01:15:51.728313 kernel: audit: type=1006 audit(1761873351.716:581): pid=5441 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1
Oct 31 01:15:51.728383 kernel: audit: type=1300 audit(1761873351.716:581): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe564be320 a2=3 a3=0 items=0 ppid=1 pid=5441 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 01:15:51.716000 audit[5441]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe564be320 a2=3 a3=0 items=0 ppid=1 pid=5441 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 01:15:51.716000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Oct 31 01:15:51.738049 kernel: audit: type=1327 audit(1761873351.716:581): proctitle=737368643A20636F7265205B707269765D
Oct 31 01:15:51.738105 kernel: audit: type=1105 audit(1761873351.729:582): pid=5441 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 01:15:51.729000 audit[5441]: USER_START pid=5441 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0
msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 01:15:51.745510 kernel: audit: type=1103 audit(1761873351.731:583): pid=5444 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 01:15:51.731000 audit[5444]: CRED_ACQ pid=5444 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 01:15:51.834063 sshd[5441]: pam_unix(sshd:session): session closed for user core
Oct 31 01:15:51.834000 audit[5441]: USER_END pid=5441 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 01:15:51.836850 systemd[1]: sshd@24-10.0.0.95:22-10.0.0.1:47642.service: Deactivated successfully.
Oct 31 01:15:51.838045 systemd-logind[1300]: Session 25 logged out. Waiting for processes to exit.
Oct 31 01:15:51.838112 systemd[1]: session-25.scope: Deactivated successfully.
Oct 31 01:15:51.839549 systemd-logind[1300]: Removed session 25.
Oct 31 01:15:51.834000 audit[5441]: CRED_DISP pid=5441 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 01:15:51.849308 kernel: audit: type=1106 audit(1761873351.834:584): pid=5441 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 01:15:51.849359 kernel: audit: type=1104 audit(1761873351.834:585): pid=5441 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 01:15:51.836000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.95:22-10.0.0.1:47642 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success'
Oct 31 01:15:52.487801 kubelet[2122]: E1031 01:15:52.487756 2122 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-86b4655b9-f4c4n" podUID="9f314ab5-dad4-417f-bff7-f3843175cd3e"
Oct 31 01:15:55.487653 kubelet[2122]: E1031 01:15:55.487576 2122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86687d576-r924d" podUID="b7a793cf-29da-4092-aaf4-95f63c307028"
Oct 31 01:15:56.486881 kubelet[2122]: E1031 01:15:56.486843 2122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\":
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85445fc7bc-269qr" podUID="cbcd2bd9-2395-4730-b047-aac75539fb47"
Oct 31 01:15:56.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.95:22-10.0.0.1:47648 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:15:56.837648 systemd[1]: Started sshd@25-10.0.0.95:22-10.0.0.1:47648.service.
Oct 31 01:15:56.839288 kernel: kauditd_printk_skb: 1 callbacks suppressed
Oct 31 01:15:56.839349 kernel: audit: type=1130 audit(1761873356.837:587): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.95:22-10.0.0.1:47648 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:15:56.870000 audit[5478]: USER_ACCT pid=5478 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 01:15:56.870990 sshd[5478]: Accepted publickey for core from 10.0.0.1 port 47648 ssh2: RSA SHA256:BzWaVf4M0LrLtWllQvHpK+M/9x+T9duV7gwz9J5cQAA
Oct 31 01:15:56.873340 sshd[5478]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 31 01:15:56.872000 audit[5478]: CRED_ACQ pid=5478 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 01:15:56.877300 systemd-logind[1300]: New session 26 of user core.
Oct 31 01:15:56.878394 systemd[1]: Started session-26.scope.
Oct 31 01:15:56.883773 kernel: audit: type=1101 audit(1761873356.870:588): pid=5478 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 01:15:56.883853 kernel: audit: type=1103 audit(1761873356.872:589): pid=5478 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 01:15:56.887832 kernel: audit: type=1006 audit(1761873356.872:590): pid=5478 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1
Oct 31 01:15:56.887897 kernel: audit: type=1300 audit(1761873356.872:590): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff500fad40 a2=3 a3=0 items=0 ppid=1 pid=5478 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 01:15:56.872000 audit[5478]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff500fad40 a2=3 a3=0 items=0 ppid=1 pid=5478 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 01:15:56.872000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Oct 31 01:15:56.896975 kernel: audit: type=1327 audit(1761873356.872:590): proctitle=737368643A20636F7265205B707269765D
Oct 31 01:15:56.897029 kernel: audit: type=1105 audit(1761873356.883:591): pid=5478 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail
acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 01:15:56.883000 audit[5478]: USER_START pid=5478 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 01:15:56.904224 kernel: audit: type=1103 audit(1761873356.884:592): pid=5481 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 01:15:56.884000 audit[5481]: CRED_ACQ pid=5481 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 01:15:56.994438 sshd[5478]: pam_unix(sshd:session): session closed for user core
Oct 31 01:15:56.994000 audit[5478]: USER_END pid=5478 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 01:15:56.997032 systemd[1]: sshd@25-10.0.0.95:22-10.0.0.1:47648.service: Deactivated successfully.
Oct 31 01:15:56.997985 systemd-logind[1300]: Session 26 logged out. Waiting for processes to exit.
Oct 31 01:15:56.998119 systemd[1]: session-26.scope: Deactivated successfully.
Oct 31 01:15:56.999156 systemd-logind[1300]: Removed session 26.
Oct 31 01:15:56.994000 audit[5478]: CRED_DISP pid=5478 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 01:15:57.009688 kernel: audit: type=1106 audit(1761873356.994:593): pid=5478 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 01:15:57.009780 kernel: audit: type=1104 audit(1761873356.994:594): pid=5478 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 01:15:56.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.95:22-10.0.0.1:47648 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success'
Oct 31 01:15:57.486936 kubelet[2122]: E1031 01:15:57.486866 2122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wj6mp" podUID="50cdc712-db7a-41da-8129-57ca3765d884"
Oct 31 01:15:58.487977 kubelet[2122]: E1031 01:15:58.487924 2122 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fd8js" podUID="bd0bddee-8a85-4f55-a28b-a795608cb1fb"
Oct 31 01:16:00.488779 kubelet[2122]: E1031 01:16:00.488733 2122 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code
= NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86687d576-lcpfh" podUID="7cb997cc-c908-4ddb-9523-a2aea9785811"
Oct 31 01:16:01.998004 systemd[1]: Started sshd@26-10.0.0.95:22-10.0.0.1:56078.service.
Oct 31 01:16:01.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.95:22-10.0.0.1:56078 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:16:02.007653 kernel: kauditd_printk_skb: 1 callbacks suppressed
Oct 31 01:16:02.007876 kernel: audit: type=1130 audit(1761873361.997:596): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.95:22-10.0.0.1:56078 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success'
Oct 31 01:16:02.033000 audit[5493]: USER_ACCT pid=5493 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 01:16:02.040000 audit[5493]: CRED_ACQ pid=5493 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 01:16:02.041304 sshd[5493]: Accepted publickey for core from 10.0.0.1 port 56078 ssh2: RSA SHA256:BzWaVf4M0LrLtWllQvHpK+M/9x+T9duV7gwz9J5cQAA
Oct 31 01:16:02.041658 kernel: audit: type=1101 audit(1761873362.033:597): pid=5493 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 01:16:02.041692 kernel: audit: type=1103 audit(1761873362.040:598): pid=5493 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 01:16:02.041406 sshd[5493]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 31 01:16:02.051784 systemd[1]: Started session-27.scope.
Oct 31 01:16:02.051851 systemd-logind[1300]: New session 27 of user core.
Oct 31 01:16:02.052732 kernel: audit: type=1006 audit(1761873362.040:599): pid=5493 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1
Oct 31 01:16:02.040000 audit[5493]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffa8ac6040 a2=3 a3=0 items=0 ppid=1 pid=5493 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 01:16:02.064645 kernel: audit: type=1300 audit(1761873362.040:599): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffa8ac6040 a2=3 a3=0 items=0 ppid=1 pid=5493 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 01:16:02.064773 kernel: audit: type=1327 audit(1761873362.040:599): proctitle=737368643A20636F7265205B707269765D
Oct 31 01:16:02.040000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Oct 31 01:16:02.064000 audit[5493]: USER_START pid=5493 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 01:16:02.074036 kernel: audit: type=1105 audit(1761873362.064:600): pid=5493 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 01:16:02.065000 audit[5496]: CRED_ACQ pid=5496 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 31
01:16:02.081439 kernel: audit: type=1103 audit(1761873362.065:601): pid=5496 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 01:16:02.212289 sshd[5493]: pam_unix(sshd:session): session closed for user core
Oct 31 01:16:02.212000 audit[5493]: USER_END pid=5493 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 01:16:02.215104 systemd[1]: sshd@26-10.0.0.95:22-10.0.0.1:56078.service: Deactivated successfully.
Oct 31 01:16:02.215844 systemd[1]: session-27.scope: Deactivated successfully.
Oct 31 01:16:02.228443 kernel: audit: type=1106 audit(1761873362.212:602): pid=5493 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 01:16:02.228512 kernel: audit: type=1104 audit(1761873362.213:603): pid=5493 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 01:16:02.213000 audit[5493]: CRED_DISP pid=5493 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Oct 31 01:16:02.228639 systemd-logind[1300]: Session 27 logged out. Waiting for processes to exit.
Oct 31 01:16:02.214000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.95:22-10.0.0.1:56078 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 01:16:02.229454 systemd-logind[1300]: Removed session 27.